
Section III. Drugs Acting on the Central Nervous System

Chapter 12. Neurotransmission and the Central Nervous System


Overview

Drugs that act upon the central nervous system (CNS) influence the lives of everyone, every day. These agents are invaluable therapeutically because they can produce specific physiological and psychological effects. Without general anesthetics, modern surgery would be impossible. Drugs that affect the CNS can selectively relieve pain, reduce fever, suppress disordered movement, induce sleep or arousal, reduce the desire to eat, or allay the tendency to vomit. Selectively acting drugs can be used to treat anxiety, mania, depression, or schizophrenia and do so without altering consciousness (see Chapters 19: Drugs and the Treatment of Psychiatric Disorders: Depression and Anxiety Disorders and 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania).

The nonmedical self-administration of CNS-active drugs is a widespread practice. Socially acceptable stimulants and antianxiety agents produce stability, relief, and even pleasure for many. However, the excessive use of these and other drugs also can affect lives adversely when their uncontrolled, compulsive use leads to physical dependence on the drug or to toxic side effects, which may include lethal overdosage (see Chapter 24: Drug Addiction and Drug Abuse).

The unique quality of drugs that affect the nervous system and behavior places investigators who study the CNS in the midst of an extraordinary scientific challenge: the attempt to understand the cellular and molecular basis for the enormously complex and varied functions of the human brain. In this effort, pharmacologists have two major goals: to use drugs to elucidate the mechanisms that operate in the normal CNS and to develop appropriate drugs to correct pathophysiological events in the abnormal CNS. Approaches to the elucidation of the sites and mechanisms of action of CNS drugs demand an understanding of the cellular and molecular biology of the brain.
Although knowledge of the anatomy, physiology, and chemistry of the nervous system is far from complete, the acceleration of interdisciplinary research on the CNS has led to remarkable progress. This chapter introduces guidelines and fundamental principles for the comprehensive analysis of drugs that affect the CNS. Specific therapeutic approaches to neurological and psychiatric disorders are discussed in the chapters that follow in this section (see Chapters 13: History and Principles of Anesthesiology, 14: General Anesthetics, 15: Local Anesthetics, 16: Therapeutic Gases: Oxygen, Carbon Dioxide, Nitric Oxide, and Helium, 17: Hypnotics and Sedatives, 18: Ethanol, 19: Drugs and the Treatment of Psychiatric Disorders: Depression and Anxiety Disorders, 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania, 21: Drugs Effective in the Therapy of the Epilepsies, 22: Treatment of Central Nervous System Degenerative Disorders, 23: Opioid Analgesics, and 24: Drug Addiction and Drug Abuse).

Organizational Principles of the Brain

The brain is an assembly of interrelated neural systems that regulate their own and each other's activity in a dynamic, complex fashion.

Macrofunctions of Brain Regions

The large anatomical divisions provide a superficial classification of the distribution of brain functions.

Cerebral Cortex

The two cerebral hemispheres constitute the largest division of the brain. Regions of the cortex are classified in several ways: (1) by the modality of information processed (e.g., sensory, including somatosensory, visual, auditory, and olfactory, as well as motor and associational); (2) by anatomical position (frontal, temporal, parietal, and occipital); and (3) by the geometrical relationship between cell types in the major cortical layers ("cytoarchitectonic" classifications). The cerebral cortex exhibits a relatively uniform laminar appearance within any given local region. Columnar sets of approximately 100 vertically connected neurons are thought to form an elemental processing module. The specialized functions of a cortical region arise from the interplay upon this basic module of connections among other regions of the cortex (corticocortical systems) and noncortical areas of the brain (subcortical systems) (see Mountcastle, 1997). Varying numbers of adjacent columnar modules may be functionally, but transiently, linked into larger information-processing ensembles. The pathology of Alzheimer's disease, for example, destroys the integrity of the columnar modules and the corticocortical connections (see Morrison and Hof, 1997; see also Chapter 22: Treatment of Central Nervous System Degenerative Disorders). These columnar ensembles serve to interconnect nested distributed systems in which sensory associations are rapidly modifiable as information is processed (see Mountcastle, 1997; Tononi and Edelman, 1998). Cortical areas termed association areas receive and somehow process information from primary cortical sensory regions to produce higher cortical functions such as abstract thought, memory, and consciousness.
The cerebral cortices also provide supervisory integration of the autonomic nervous system, and they may integrate somatic and vegetative functions, including those of the cardiovascular and gastrointestinal systems.

Limbic System

The "limbic system" is an archaic term for an assembly of brain regions (hippocampal formation, amygdaloid complex, septum, olfactory nuclei, basal ganglia, and selected nuclei of the diencephalon) grouped around the subcortical borders of the underlying brain core to which a variety of complex emotional and motivational functions have been attributed. Modern neuroscience avoids this term, because the components of the limbic system neither function consistently as a system nor are the boundaries of such a system precisely defined. Parts of the limbic system also participate individually in functions that are capable of more precise definition. Thus, the basal ganglia or neostriatum (the caudate nucleus, putamen, globus pallidus, and lentiform nucleus) form an essential regulatory segment of the so-called extrapyramidal motor system. This system complements the function of the pyramidal (or voluntary) motor system. Damage to the extrapyramidal system depresses the ability to initiate voluntary movements and causes disorders characterized by involuntary movements, such as the tremors and rigidity of Parkinson's disease or the uncontrollable limb movements of Huntington's chorea (see Chapter 22: Treatment of Central Nervous System Degenerative Disorders). Similarly, the hippocampus may be crucial to the formation of recent memory, since this function is lost in patients with extensive bilateral damage to the hippocampus. Memory also is disrupted with Alzheimer's disease, which destroys the intrinsic structure of the hippocampus as well as parts of the frontal cortex (see also Squire, 1998).

Diencephalon

The thalamus lies in the center of the brain, beneath the cortex and basal ganglia and above the hypothalamus. The neurons of the thalamus are arranged into distinct clusters, or nuclei, which are either paired or midline structures. These nuclei act as relays between the incoming sensory pathways and the cortex, between the discrete regions of the thalamus and the hypothalamus, and between the basal ganglia and the association regions of the cerebral cortex. The thalamic nuclei and the basal ganglia also exert regulatory control over visceral functions; aphagia and adipsia, as well as general sensory neglect, follow damage to the corpus striatum or to selected circuits ending there (see Jones, 1998). The hypothalamus is the principal integrating region for the entire autonomic nervous system, and, among other functions, it regulates body temperature, water balance, intermediary metabolism, blood pressure, sexual and circadian cycles, secretion of the adenohypophysis, sleep, and emotion. Recent advances in the cytophysiological and chemical dissection of the hypothalamus have clarified the connections and possible functions of individual hypothalamic nuclei (Swanson, 1999).

Midbrain and Brainstem

The mesencephalon, pons, and medulla oblongata connect the cerebral hemispheres and thalamus-hypothalamus to the spinal cord. These "bridge portions" of the CNS contain most of the nuclei of the cranial nerves, as well as the major inflow and outflow tracts from the cortices and spinal cord. These regions contain the reticular activating system, an important but incompletely characterized region of gray matter linking peripheral sensory and motor events with higher levels of nervous integration. The major monoamine-containing neurons of the brain (see below) are found here.
Together, these regions represent the points of central integration for coordination of essential reflexive acts, such as swallowing and vomiting, and those that involve the cardiovascular and respiratory systems; these areas also include the primary receptive regions for most visceral afferent sensory information. The reticular activating system is essential for the regulation of sleep, wakefulness, and level of arousal as well as for coordination of eye movements. The fiber systems projecting from the reticular formation have been called "nonspecific," because the targets to which they project are relatively more diffuse in distribution than those of many other neurons (e.g., specific thalamocortical projections). However, the chemically homogeneous components of the reticular system innervate targets in a coherent, functional manner despite their broad distribution (see Foote and Aston-Jones, 1995; Usher et al., 1999).

Cerebellum

The cerebellum arises from the posterior pons behind the cerebral hemispheres. It is also highly laminated and redundant in its detailed cytological organization. The lobules and folia of the cerebellum project onto specific deep cerebellar nuclei, which in turn make relatively selective projections to the motor cortex (by way of the thalamus) and to the brainstem nuclei concerned with vestibular (position-stabilization) function. In addition to maintaining the proper tone of antigravity musculature and providing continuous feedback during volitional movements of the trunk and extremities, the cerebellum also may regulate visceral function (e.g., heart rate, so as to maintain blood flow despite changes in posture). In addition, the cerebellum has been shown in recent studies to play a significant role in learning and memory (see Middleton and Strick, 1998).

Spinal Cord

The spinal cord extends from the caudal end of the medulla oblongata to the lower lumbar vertebrae.
Within this mass of nerve cells and tracts, the sensory information from skin, muscles, joints, and viscera is locally coordinated with motoneurons and with primary sensory relay cells that project to and receive signals from higher levels. The spinal cord is divided into anatomical segments (cervical, thoracic, lumbar, and sacral) that correspond to divisions of the peripheral nerves and spinal column. Ascending and descending tracts of the spinal cord are located within the white matter at the perimeter of the cord, while intersegmental connections and synaptic contacts are concentrated within the H-shaped internal mass of gray matter. Sensory information flows into the dorsal cord, and motor commands exit via the ventral portion. The preganglionic neurons of the autonomic nervous system (see Chapter 6: Neurotransmission: The Autonomic and Somatic Motor Nervous Systems) are found in the intermediolateral columns of the gray matter. Autonomic reflexes (e.g., changes in skin vasculature with alteration of temperature) easily can be elicited within local segments of the spinal cord, as shown by the maintenance of these reflexes after the cord is severed.

Microanatomy of the Brain

Neurons operate either within layered structures (such as the olfactory bulb, cerebral cortex, hippocampal formation, and cerebellum) or in clustered groupings (the defined collections of central neurons that aggregate into nuclei). The specific connections between neurons within or across the macrodivisions of the brain are essential to the brain's functions. It is through their patterns of neuronal circuitry that individual neurons form functional ensembles to regulate the flow of information within and between the regions of the brain.

Cellular Organization of the Brain

Present understanding of the cellular organization of the CNS can be viewed simplistically according to three main patterns of neuronal connectivity (see Shepherd, 1998). Long-hierarchical neuronal connections typically are found in the primary sensory and motor pathways. Here the transmission of information is highly sequential, and interconnected neurons are related to each other in a hierarchical fashion.
Primary receptors (in the retina, inner ear, olfactory epithelium, tongue, or skin) transmit first to primary relay cells, then to secondary relay cells, and finally to the primary sensory fields of the cerebral cortex. For motor output systems, the reverse sequence exists, with impulses descending hierarchically from motor cortex to spinal motoneuron. This hierarchical scheme of organization provides a precise flow of information, but such organization suffers the disadvantage that destruction of any link incapacitates the entire system. Local-circuit neurons establish their connections mainly within their immediate vicinity. Such local-circuit neurons frequently are small and may have very few processes. They are believed to regulate (i.e., expand or constrain) the flow of information through their small spatial domain. Given their short axons, they may function without generating action potentials, which are essential for the long-distance transmission between hierarchically connected neurons. The neurotransmitters for many local-circuit neurons in most brain regions have been inferred through pharmacological tests (see below). Single-source divergent circuitry is utilized by certain neuronal systems of the hypothalamus, pons, and medulla. From their clustered anatomical location, these neurons extend multiple-branched and divergent connections to many target cells, almost all of which lie outside the brain region in which the neurons are located. Neurons with divergent circuitry can be conceived of as special local-circuit neurons whose spatial domains are one to two orders of magnitude larger than those of the classical intraregional interneurons rather than as sequential elements within any known hierarchical system. For example, neurons of the locus ceruleus project from the pons to the cerebellum, spinal cord, thalamus, and several cortical zones, whose function is only subtly disrupted when the adrenergic fibers are destroyed experimentally. Abundant data suggest that these systems could mediate linkages between regions that may require temporary integration (see Foote and Aston-Jones, 1995; Aston-Jones et al., 1999). The neurotransmitters for some of these connections are well known (see below), while others remain to be identified.

Cell Biology of Neurons

Neurons are classified in many different ways, according to function (sensory, motor, or interneuron), location, or identity of the transmitter they synthesize and release. Microscopic analysis focuses on their general shape and, in particular, the number of extensions from the cell body. Most neurons have one axon to carry signals to functionally interconnected target cells. Other processes, termed dendrites, extend from the nerve cell body to receive synaptic contacts from other neurons; these dendrites may branch in extremely complex patterns. Neurons exhibit the cytological characteristics of highly active secretory cells with large nuclei; large amounts of smooth and rough endoplasmic reticulum; and frequent clusters of specialized smooth endoplasmic reticulum (Golgi apparatus), in which secretory products of the cell are packaged into membrane-bound organelles for transport out of the cell body proper to the axon or dendrites (Figure 12-1). Neurons and their cellular extensions are rich in microtubules, elongated tubules approximately 24 nm in diameter. Their functions may be to support the elongated axons and dendrites and to assist in the reciprocal transport of essential macromolecules and organelles between the cell body and the distant axon or dendrites.

Figure 12-1. Drug-Sensitive Sites in Synaptic Transmission. Schematic view of the drug-sensitive sites in prototypical synaptic complexes.
In the center, a postsynaptic neuron receives a somatic synapse (shown greatly oversized) from an axonic terminal; an axoaxonic terminal is shown in contact with this presynaptic nerve terminal. Drug-sensitive sites include: (1) microtubules responsible for bidirectional transport of macromolecules between the neuronal cell body and distal processes; (2) electrically conductive membranes; (3) sites for the synthesis and storage of transmitters; (4) sites for the active uptake of transmitters by nerve terminals or glia; (5) sites for the release of transmitter; (6) postsynaptic receptors, cytoplasmic organelles, and postsynaptic proteins for expression of synaptic activity and for long-term mediation of altered physiological states; and (7) presynaptic receptors on adjacent presynaptic processes and (8) on nerve terminals (autoreceptors). Around the central neuron are schematic illustrations of the more common synaptic relationships in the CNS. (Modified from Bodian, 1972, and Cooper et al., 1996, with permission.)
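The contrast drawn above between long-hierarchical and single-source divergent circuitry can be sketched as a toy model. This is purely illustrative (the node names are hypothetical labels, not from the chapter): lesioning any relay in a serial pathway silences the entire chain, whereas lesioning one branch of a divergent system leaves the remaining targets intact.

```python
# Toy model contrasting two of the chapter's connectivity patterns.
# A strictly serial hierarchical pathway fails completely if any
# relay is lesioned; a single-source divergent system loses only
# the branches that are cut.

def hierarchical_intact(chain, lesioned):
    """A serial pathway functions only if every relay survives."""
    return not any(node in lesioned for node in chain)

def divergent_surviving_targets(source, targets, lesioned_edges):
    """A divergent system keeps every branch whose edge survives."""
    return [t for t in targets if (source, t) not in lesioned_edges]

# Destroying one relay incapacitates the whole hierarchical chain.
sensory_pathway = ["primary_receptor", "primary_relay",
                   "secondary_relay", "sensory_cortex"]
assert hierarchical_intact(sensory_pathway, lesioned=set())
assert not hierarchical_intact(sensory_pathway, lesioned={"primary_relay"})

# Cutting one branch of a divergent projection spares the others.
lc_targets = ["cerebellum", "spinal_cord", "thalamus", "cortex"]
surviving = divergent_surviving_targets(
    "locus_ceruleus", lc_targets,
    lesioned_edges={("locus_ceruleus", "thalamus")})
assert surviving == ["cerebellum", "spinal_cord", "cortex"]
```

The asymmetry in these two functions mirrors the text's observation that divergent systems such as the locus ceruleus are only subtly disrupted by experimental lesions, while hierarchical pathways fail at any broken link.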

The sites of interneuronal communication in the CNS are termed synapses (see below). Although synapses are functionally analogous to "junctions" in the somatic motor and autonomic nervous systems, the central junctions are characterized morphologically by various additional forms of paramembranous deposits of specific proteins (essential for transmitter release, response, and catabolism; see Liu and Edwards, 1997; Geppert and Südhof, 1998). These specialized sites are presumed to be the active zone for transmitter release and response. The paramembranous proteins constitute a specialized junctional adherence zone, termed the synaptolemma (see Bodian, 1972). Like peripheral "junctions," central synapses also are denoted by accumulations of tiny (500 to 1500 Å) organelles, termed synaptic vesicles. The proteins of these vesicles have been shown to have specific roles in transmitter storage, vesicle docking onto presynaptic membranes, voltage- and Ca2+-dependent secretion (see Chapter 6: Neurotransmission: The Autonomic and Somatic Motor Nervous Systems), and recycling and restorage of released transmitter (see Augustine et al., 1999).

Synaptic Relationships

Synaptic arrangements in the CNS fall into a wide variety of morphological and functional forms that are specific for the cells involved. Many spatial arrangements are possible within these highly individualized synaptic relationships (see Figure 12-1). The most common arrangement, typical of the hierarchical pathways, is the axodendritic or axosomatic synapse, in which the axons of the cell of origin make their functional contact with the dendrites or cell body of the target. In other cases, functional contacts may occur more rarely between adjacent cell bodies (somasomatic) or overlapping dendrites (dendrodendritic). Some local-circuit neurons can enter into synaptic relationships through modified dendrites, telodendrites, that can be either presynaptic or postsynaptic. Within the spinal cord, serial axoaxonic synapses are relatively frequent. Here, the axon of an interneuron ends on the terminal of a long-distance neuron as that terminal contacts a dendrite in the dorsal horn. Many presynaptic axons contain local collections of typical synaptic vesicles with no opposed specialized synaptolemma (termed boutons en passant). Release of transmitter may not occur at such sites. The bioelectric properties of neurons and junctions in the CNS generally follow the outlines and details already described for the peripheral autonomic nervous system (see Chapter 6: Neurotransmission: The Autonomic and Somatic Motor Nervous Systems). However, in the CNS there is a much more varied range of intracellular mechanisms (Nicoll et al., 1990; Tzounopoulos et al., 1998).

Supportive Cells

Neurons are not the only cells in the CNS. According to most estimates, neurons are outnumbered, perhaps by an order of magnitude, by the various nonneuronal supportive cellular elements (see Cherniak, 1990). Nonneuronal cells include the macroglia; the microglia; the cells of the vascular elements (including the intracerebral vasculature as well as the cerebrospinal fluid-forming cells of the choroid plexus found within the intracerebral ventricular system); and the meninges, which cover the surface of the brain and comprise the cerebrospinal fluid-containing envelope.
Macroglia are the most abundant supportive cells; some are categorized as astrocytes (nonneuronal cells interposed between the vasculature and the neurons, often surrounding individual compartments of synaptic complexes). Astrocytes play a variety of metabolic support roles, including furnishing energy intermediates and supplementary removal of excessive extracellular neurotransmitter secretions (see Magistretti et al., 1995). A second prominent category of macroglia are the myelin-producing cells, the oligodendroglia. Myelin, made up of multiple layers of their compacted membranes, insulates segments of long axons bioelectrically and accelerates action-potential conduction velocity. Microglia are relatively uncharacterized supportive cells believed to be of mesodermal origin and related to the macrophage/monocyte lineage (see Aloisi, 1999; González-Scarano and Baltuch, 1999). Some microglia are resident within the brain, while additional cells of this class may be attracted to the brain during periods of inflammation following either microbial infection or other postinjury inflammatory reactions. The response of the brain to inflammation differs strikingly from that of other tissues (see Andersson et al., 1992; Raber et al., 1998; Schnell et al., 1999) and may in part explain its unique reactions to trauma (see below).

Blood-Brain Barrier

Apart from the exceptional instances in which drugs are introduced directly into the CNS, the concentration of the agent in the blood after oral or parenteral administration will differ substantially from its concentration in the brain. Although not thoroughly defined anatomically, the blood-brain barrier is an important boundary between the periphery and the CNS in the form of a permeability barrier to the passive diffusion of substances from the bloodstream into various regions of the CNS (see Park and Cho, 1991; Rubin and Staddon, 1999).
Evidence of the barrier is provided by the greatly diminished rate of access of chemicals from plasma to the brain (see Chapter 1: Pharmacokinetics: The Dynamics of Drug Absorption, Distribution, and Elimination). This barrier is much less prominent in the hypothalamus and in several small, specialized organs lining the third and fourth ventricles of the brain: the median eminence, area postrema, pineal gland, subfornical organ, and subcommissural organ. In addition, there is little evidence of a barrier between the circulation and the peripheral nervous system (e.g., sensory and autonomic nerves and ganglia). While severe limitations are imposed on the diffusion of macromolecules, selective barriers to permeation also exist for small charged molecules such as neurotransmitters, their precursors and metabolites, and some drugs. These diffusional barriers are at present best thought of as a combination of the partition of solute across the vasculature (which governs passage by definable properties such as molecular weight, charge, and lipophilicity) and the presence or absence of energy-dependent transport systems. Active transport of certain agents may occur in either direction across the barriers. The diffusional barriers retard the movement of substances from brain to blood as well as from blood to brain. The brain clears metabolites of transmitters into the cerebrospinal fluid by excretion through the acid transport system of the choroid plexus (see Cserr and Bundgaard, 1984; Strange, 1993). Substances that rarely gain access to the brain from the bloodstream often can reach the brain after injection directly into the cerebrospinal fluid. Under certain conditions, it may be possible to open the blood-brain barrier, at least transiently, to permit the entry of chemotherapeutic agents (see Emerich et al., 1998; Granholm et al., 1998; LeMay et al., 1998, for discussion). Cerebral ischemia and inflammation also modify the blood-brain barrier, resulting in increased access to substances that ordinarily would not affect the brain.
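As a rough illustration of how passive permeation is governed by molecular weight, charge, and lipophilicity, the sketch below encodes a simple screening heuristic. The numeric thresholds are illustrative assumptions loosely echoing common medicinal-chemistry rules of thumb, not values given in this chapter, and the sketch deliberately ignores the energy-dependent transport and efflux systems the text describes.

```python
# Hypothetical rule-of-thumb screen for passive blood-brain barrier
# permeation. All cutoffs are illustrative assumptions, not data from
# this chapter; active transport and efflux are ignored.

def likely_passive_cns_permeant(mol_weight, log_p, h_bond_donors,
                                net_charge=0):
    """Return True if a molecule plausibly crosses the barrier by
    passive diffusion under these (assumed) cutoffs."""
    return (mol_weight < 450          # macromolecules are excluded
            and 1.0 <= log_p <= 3.0   # moderate lipophilicity favored
            and h_bond_donors <= 3    # excess polarity impedes diffusion
            and net_charge == 0)      # charged species diffuse poorly

# A small, lipophilic, neutral molecule passes the screen...
assert likely_passive_cns_permeant(300, 2.0, 1)
# ...while a large or a charged one does not.
assert not likely_passive_cns_permeant(600, 2.0, 1)
assert not likely_passive_cns_permeant(300, 2.0, 1, net_charge=1)
```

The exclusion of charged species in this toy filter corresponds to the chapter's point that even small charged molecules such as neurotransmitters and their metabolites face selective barriers to permeation.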
Response to Damage: Repair and Plasticity in the CNS

Because the neurons of the CNS are terminally differentiated cells, they do not undergo proliferative responses to damage, although recent evidence suggests the possibility of neural stem-cell proliferation as a natural means for selected neuronal replacement (see Gage, 2000). As a result, neurons have evolved other adaptive mechanisms to provide for maintenance of function following injury. These adaptive mechanisms endow the brain with considerable capacity for structural and functional modification well into adulthood (see Yang et al., 1994; Jones et al., 2000), and they may represent some of the mechanisms employed in the phenomena of memory and learning (see Kandel and O'Dell, 1992). Recent studies have shown that molecular signaling processes employed during brain development also may be involved in the plasticity seen in the adult brain, relying on specific neurotrophic agents (see Bothwell, 1995; Casaccia-Bonnefil et al., 1998; Chao et al., 1998; see below).

Integrative Chemical Communication in the Central Nervous System

The capacity to integrate information from a variety of external and internal sources epitomizes the cardinal role of the CNS, namely to optimize the needs of the organism within the demands of the individual's environment. These integrative concepts transcend individual transmitter systems and emphasize the means by which neuronal activity is normally coordinated. Only through a detailed understanding of these integrative functions, and their failure in certain pathophysiological conditions, can effective and specific therapeutic approaches be developed for neurological and psychiatric disorders. The identification of molecular and cellular mechanisms of neural integration is productively linked to clinical therapeutics, because untreatable diseases and unexpected nontherapeutic side effects of drugs often reveal ill-defined mechanisms of pathophysiology. Such observations can then drive the search for novel mechanisms of cellular regulation. The capacity to link molecular processes to behavioral operations, both normal and pathological, provides one of the most exciting aspects of modern neuropharmacological research.

A central underlying concept of neuropsychopharmacology is that drugs that influence behavior and improve the functional status of patients with neurological or psychiatric diseases act by enhancing or blunting the effectiveness of specific combinations of synaptic transmitter actions. Four research strategies provide the neuroscientific substrates of neuropsychological phenomena: molecular, cellular, multicellular (or systems), and behavioral. The intensively exploited molecular level has been the traditional focus for characterizing drugs that alter behavior. Molecular discoveries provide biochemical probes for identifying the appropriate neuronal sites and their mediative mechanisms. Such mechanisms include: (1) the ion channels, which provide for changes in excitability induced by neurotransmitters; (2) the neurotransmitter receptors (see below); (3) the auxiliary intramembranous and cytoplasmic transductive molecules that couple these receptors to intracellular effectors for short-term changes in excitability and for longer-term regulation, e.g., through alterations in gene expression (see Neyroz et al., 1993; Gudermann et al., 1997); and (4) transporters for the conservation of released transmitter molecules by reaccumulation into nerve terminals, and then into synaptic vesicles (Blakely et al., 1994; Amara and Sonders, 1998; Fairman and Amara, 1999). Transport across vesicle membranes utilizes a transport protein distinct from that involved in reuptake into nerve terminals (Liu and Edwards, 1997). Research at the molecular level also provides the pharmacological tools to verify the working hypotheses of other molecular, cellular, and behavioral strategies and allows for a means to pursue their genetic basis. Thus, the most basic cellular phenomena of neurons now can be understood in terms of such discrete molecular entities. It has been known for some time that the basic excitability of neurons is achieved through modifications of the ion channels that all neurons express in abundance in their plasma membranes.
However, it is now possible to understand precisely how the three major cations, Na+, K+, and Ca2+, as well as the Cl- anion, are regulated in their flow through highly discriminative ion channels (see Figures 12-2 and 12-3). The voltage-dependent ion channels (Figure 12-2), which are contrasted with the "ligand-gated ion channels" (Figure 12-3), provide for rapid changes in ion permeability. These rapid changes underlie the rapid propagation of signals along axons and dendrites and the excitation-secretion coupling that releases neurotransmitters from presynaptic sites (Catterall, 1988, 1993). Cloning, expression, and functional assessment of constrained molecular modifications have defined conceptual chemical similarities among the major cation channels (see Figure 12-2A). The intrinsic membrane-embedded domains of the Na+ and Ca2+ channels are envisioned as four tandem repeats of a putative six-transmembrane domain, while the K+ channel family contains greater molecular diversity. X-ray crystallography has now confirmed these configurations for the K+ channel (Doyle et al., 1998). One structural form of voltage-regulated K+ channels, shown in Figure 12-2C, consists of subunits composed of a single putative six-transmembrane domain. The inward rectifier K+ channel structure, in contrast, retains the general configuration corresponding to transmembrane spans 5 and 6 with the interposed "pore region" that penetrates only the exofacial surface membrane. These two structural categories of K+ channels can form heteroligomers, giving rise to multiple possibilities for regulation by voltage, neurotransmitters, assembly with intracellular auxiliary proteins, or posttranslational modifications (Krapivinsky et al., 1995).
The structurally defined channel molecules (see Jan et al., 1997; Doyle et al., 1998) now can be examined to determine how drugs, toxins, and imposed voltages alter the excitability of a neuron, permitting a cell either to become spontaneously active or to die through prolonged opening of such channels (see Adams and Swanson, 1994). Within the CNS, variants of the K+ channels (the delayed rectifier, the Ca2+-activated K+ channel, and the afterhyperpolarizing K+ channel) regulated by intracellular second messengers repeatedly have been shown to underlie complex forms of synaptic modulation (see Nicoll et al., 1990; Malenka and Nicoll, 1999).
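The driving force on each of these ions can be made concrete with the Nernst equation, which gives the equilibrium potential an ion-selective channel pulls the membrane toward. The sketch below uses approximate, textbook-style mammalian concentrations (assumed values for illustration, not figures from this chapter).

```python
import math

# Nernst equilibrium potential: E = (RT/zF) * ln([out]/[in]).
R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # body temperature, K
F = 96485.0    # Faraday constant, C/mol

def nernst_mV(z, conc_out, conc_in):
    """Equilibrium potential (mV) for an ion of valence z, given
    extracellular and intracellular concentrations (same units)."""
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out / conc_in)

# Approximate mammalian concentrations in mM (assumed values).
E_K  = nernst_mV(+1, conc_out=5.0,   conc_in=140.0)   # about -89 mV
E_Na = nernst_mV(+1, conc_out=145.0, conc_in=15.0)    # about +61 mV
E_Cl = nernst_mV(-1, conc_out=110.0, conc_in=10.0)    # about -64 mV
E_Ca = nernst_mV(+2, conc_out=2.0,   conc_in=1e-4)    # about +132 mV
```

Opening K+-selective channels thus drives the membrane toward a strongly negative (hyperpolarized) potential, whereas opening Na+- or Ca2+-selective channels drives it toward positive potentials, which is why the second-messenger-regulated K+ channel variants named above can powerfully damp excitability.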

Figure 12-2. The Major Molecular Motifs of Ion Channels That Establish and Regulate Neuronal Excitability in the CNS. A. The alpha subunits of the Ca2+ and Na+ channels share a similar presumptive six-transmembrane structure, repeated four times, in which an intramembranous segment separates transmembrane segments 5 and 6. B. The Ca2+ channel also requires several auxiliary small proteins (alpha-2, beta, gamma, and delta). The alpha-2 and delta subunits are linked by a disulfide bond (not shown). Regulatory subunits also exist for Na+ channels. C. Voltage-sensitive K+ channels (Kv) and the rapidly activating K+ channel (KA) share a similar presumptive six-transmembrane domain, currently indistinguishable in overall configuration from one repeat unit within the Na+ and Ca2+ channel structure, while the inwardly rectifying K+ channel protein (Kir) retains the general configuration of just loops 5 and 6. Regulatory subunits can alter Kv channel functions. Channels of these two overall motifs can form heteromultimers (Krapivinsky et al., 1995).

Figure 12-3. Ionophore Receptors for Neurotransmitters Are Composed of Subunits with Four Presumptive Transmembrane Domains and Are Assembled as Tetramers or Pentamers (at Right). The predicted motif shown likely describes nicotinic cholinergic receptors for ACh, GABAA receptors for gamma-aminobutyric acid, 5-HT3 receptors for serotonin, and receptors for glycine. Ionophore receptors for glutamate, however, probably are not accurately represented by this schematic motif.

Research at the cellular level determines which specific neurons and which of their most proximate synaptic connections may mediate a behavior or the behavioral effects of a given drug. For example, research at the cellular level into the basis of emotion exploits both molecular and behavioral leads to determine the most likely brain sites at which behavioral changes pertinent to emotion can be analyzed. Such research provides clues as to the nature of the interactions in terms of interneuronal communication (i.e., excitation, inhibition, or more complex forms of synaptic interaction; see Aston-Jones et al., 1999; Brown et al., 1999). An understanding at the systems level is required to assemble the descriptive structural and functional properties of specific central transmitter systems, linking the neurons that make and release a given transmitter to the possible effects of its release at the behavioral level. While many such transmitter-to-behavior linkages have been postulated, it has proven difficult to validate the essential involvement of specific transmitter-defined neurons in the mediation of specific mammalian behaviors. Research at the behavioral level often can illuminate the integrative phenomena that link populations of neurons (often through operationally or empirically defined ways) into extended specialized circuits, ensembles, or more pervasively distributed systems that integrate the physiological expression of a learned, reflexive, or spontaneously generated behavioral response. The entire concept of animal models of human psychiatric diseases rests on the assumption that scientists can appropriately infer from observations of behavior and physiology (heart rate, respiration, locomotion, etc.) that the states experienced by animals are equivalent to the emotional states experienced by human beings expressing similar physiological changes (see Kandel, 1998). 
Identification of Central Transmitters An essential step in understanding the functional properties of neurotransmitters within the context of the circuitry of the brain is to identify which substances are the transmitters for specific interneuronal connections. The criteria for the rigorous identification of central transmitters require the same data used to establish the transmitters of the autonomic nervous system (see Chapter 6: Neurotransmission: The Autonomic and Somatic Motor Nervous Systems). 1. The transmitter must be shown to be present in the presynaptic terminals of the synapse and in
the neurons from which those presynaptic terminals arise. Extensions of this criterion involve the demonstration that the presynaptic neuron synthesizes the transmitter substance, rather than simply storing it after accumulation from a nonneural source. Microscopic cytochemistry with antibodies or in situ hybridization, subcellular fractionation, and biochemical analysis of brain tissue are particularly suited to satisfy this criterion. These techniques often are combined in experimental animals with the production of surgical or chemical lesions of presynaptic neurons or their tracts to demonstrate that the lesion eliminates the proposed transmitter from the target region. Detection of the mRNA for receptors within postsynaptic neurons using molecular biological methods can strengthen the satisfaction of this criterion. 2. The transmitter must be released from the presynaptic nerve concomitantly with presynaptic nerve activity. This criterion is best satisfied by electrical stimulation of the nerve pathway in vivo and collection of the transmitter in an enriched extracellular fluid within the synaptic target area. Demonstrating release of a transmitter used to require sampling for prolonged intervals, but modern approaches employ minute microdialysis tubing or microvoltammetric electrodes capable of sensitive detection of amine and amino acid transmitters within spatially and temporally meaningful dimensions (see Parsons and Justice, 1994; Humpel et al., 1996). Release of transmitter also can be studied in vitro by ionic or electrical activation of thin brain slices or subcellular fractions that are enriched in nerve terminals. The release of all transmitter substances so far studied, including presumptive transmitter release from dendrites (Morris et al., 1998), is voltage-dependent and requires the influx of Ca2+ into the presynaptic terminal. 
However, transmitter release is relatively insensitive to extracellular Na+ or to tetrodotoxin, which blocks transmembrane movement of Na+. 3. When applied experimentally to the target cells, the effects of the putative transmitter must be identical to the effects of stimulating the presynaptic pathway. This criterion can be met loosely by qualitative comparisons (e.g., both the substance and the pathway inhibit or excite the target cell). More convincing is the demonstration that the ionic conductances activated by the pathway are the same as those activated by the candidate transmitter. Alternatively, the criterion can be satisfied less rigorously by demonstration of the pharmacological identity of receptors. In general, pharmacological antagonism of the actions of the pathway and those of the candidate transmitter should be achieved by similar doses of the same drug. To be convincing, the antagonistic drug should not affect responses of the target neurons to other unrelated pathways or to chemically distinct transmitter candidates. Actions that are qualitatively identical to those that follow stimulation of the pathway also should be observed with synthetic agonists that mimic the actions of the transmitter. Other studies, especially those that have implicated peptides as transmitters in the central and peripheral nervous systems, suggest that many brain and spinal cord synapses contain more than one transmitter substance (see Hökfelt et al., 2000). Although rigorous proof is lacking, substances that coexist in a given synapse are presumed to be released together and to act jointly on the postsynaptic membrane (see Derrick and Martinez, 1994; Jin and Chavkin, 1999). Clearly, if more than one substance transmits information, no single agonist necessarily would provide faithful mimicry, nor would an antagonist provide total antagonism of activation of a given presynaptic element. 
CNS Transmitter Discovery Strategies The earliest transmitters considered for central roles were acetylcholine and norepinephrine, largely because of their established roles in the somatic motor and autonomic nervous systems. In the 1960s, serotonin, epinephrine, and dopamine also were investigated as potential CNS transmitters. Histochemical as well as biochemical and pharmacological data yielded results consistent with roles as neurotransmitters, but complete satisfaction of all criteria was not achieved. In the early 1970s,
the availability of selective and potent antagonists of gamma-aminobutyric acid (GABA), glycine, and glutamate, all known to be enriched in brain, led to their general acceptance as transmitter substances. Also at this time, a search for hypothalamic-hypophyseal factors led to an improvement in the technology to isolate, purify, sequence, and synthetically replicate a growing family of neuropeptides (see Hökfelt et al., 2000, for an overview). This advance, coupled with the widespread application of immunohistochemistry, strongly supported the view that neuropeptides may act as transmitters. Adaptation of bioassay technology from studies of pituitary secretions to other effectors (such as smooth-muscle contractility and, later, ligand-displacement assays) gave rise to the discovery of endogenous peptide ligands for drugs acting at opiate receptors (see Chapter 23: Opioid Analgesics). The search for endogenous factors whose receptors constituted the drug-binding sites was extended later to the benzodiazepine receptors (Costa and Guidotti, 1991). A more recent extension of this strategy has identified a series of endogenous lipid amides as the natural ligands for the cannabinoid receptors (see Piomelli et al., 1998). Assessment of Receptor Properties Until quite recently, central synaptic receptors were characterized either by examination of their ability to bind radiolabeled agonists or antagonists (and of the ability of other unlabeled compounds to compete for such binding sites) or by the electrophysiological or biochemical consequences of receptor activation of neurons in vivo or in vitro. Radioligand-binding assays can quantify binding sites within a region, track their appearance throughout the phylogenetic scale and during brain development, and evaluate how physiological or pharmacological manipulation regulates receptor number or affinity (see Dumont et al., 1998; Redrobe et al., 1999, for examples). 
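The radioligand-binding assays mentioned above rest on a simple mass-action model: specific binding B = Bmax·[L]/(Kd + [L]). The sketch below, with synthetic data and an intentionally naive grid search (all ligand concentrations and parameter values are invented for illustration), shows how Bmax and Kd would be recovered from a saturation experiment:

```python
def one_site_binding(ligand_nM, bmax, kd_nM):
    """Specific binding predicted by a one-site mass-action model."""
    return bmax * ligand_nM / (kd_nM + ligand_nM)

# Synthetic saturation data: free ligand (nM) -> bound (fmol/mg protein),
# generated from the "true" values Bmax = 200, Kd = 2.0 nM.
data = [(L, one_site_binding(L, 200.0, 2.0)) for L in (0.25, 0.5, 1, 2, 4, 8, 16)]

def fit_grid(points):
    """Least-squares grid search over candidate (Bmax, Kd) pairs."""
    best = None
    for bmax in range(100, 401):                      # 100..400 fmol/mg
        for kd in (k / 20.0 for k in range(2, 201)):  # 0.1..10.0 nM
            sse = sum((b - one_site_binding(L, bmax, kd)) ** 2 for L, b in points)
            if best is None or sse < best[0]:
                best = (sse, float(bmax), kd)
    return best[1], best[2]

bmax_hat, kd_hat = fit_grid(data)
print(f"Bmax ~ {bmax_hat:.0f} fmol/mg, Kd ~ {kd_hat:.2f} nM")
```

In practice one would use nonlinear regression (or, historically, a Scatchard transformation) rather than a grid search, but the underlying one-site model is the same.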
The properties of the cellular response to the transmitter can be studied electrophysiologically by the use of microiontophoresis (involving recording from single cells and highly localized drug administration). The patch-clamp technique can be used to study the electrical properties of single ionic channels and their regulation by neurotransmitters. These direct electrophysiological tests of neuronal responsiveness can provide qualitative and quantitative information on the effects of a putative transmitter substance (see Jardemark et al., 1998, for recent examples). Receptor properties also can be studied biochemically when the activated receptor is coupled to an enzymatic reaction, such as the synthesis of a second messenger, and the ensuing biochemical changes are measured. In the current era, molecular biological techniques have led to identification of mRNAs (or cDNAs) for the receptors for virtually every natural ligand considered as a neurotransmitter. A common practice is to introduce these coding sequences into test cells (frog oocytes or mammalian cells) and to assess the relative effects of ligands and of second-messenger production in such cells. Molecular cloning studies have revealed two major (see Figures 12-3 and 12-4) and one minor molecular motif of transmitter receptors. Oligomeric ion-channel receptors composed of multiple subunits usually have four putative "transmembrane domains" consisting of 20 to 25 generally hydrophobic amino acids (see Figure 12-3). The ion channel receptors (called ionotropic receptors) for neurotransmitters contain sites for reversible phosphorylation by protein kinases and phosphoprotein phosphatases and for voltage-gating. 
Receptors with this structure include nicotinic cholinergic (or nicotinic acetylcholine) receptors (see Chapters 2: Pharmacodynamics: Mechanisms of Drug Action and the Relationship between Drug Concentration and Effect and 7: Muscarinic Receptor Agonists and Antagonists); the receptors for the amino acids GABA, glycine, glutamate, and aspartate; and the 5-HT3 receptor for serotonin (see Chapter 11: 5-Hydroxytryptamine (Serotonin): Receptor Agonists and Antagonists).

Figure 12-4. G Protein-Coupled Receptors Are Composed of a Single Subunit, with Seven Presumptive Transmembrane Domains. For small neurotransmitters, the binding pocket is buried within the bilayer; sequences in the second cytoplasmic loop and projecting out of the bilayer at the base of transmembrane spans 5 and 6 have been implicated in agonist-facilitated G protein coupling (see Chapter 2: Pharmacodynamics: Mechanisms of Drug Action and the Relationship Between Drug Concentration and Effect).

The second major structural motif for transmitter receptors is manifest by G protein-coupled receptors (GPCRs), in which a monomeric receptor has seven putative transmembrane domains, with varying intra- and extracytoplasmic loop lengths (see Figure 12-4). Multiple mutagenesis strategies have defined how the activated receptors (themselves subject to reversible phosphorylation at one or more functionally distinct sites) can interact with the heterotrimeric GTP-binding protein complex to ultimately activate, inhibit, or otherwise regulate enzymatic effector systems, e.g., adenylyl cyclase or phospholipase C, or ion channels, such as voltage-gated Ca2+ channels or receptor-operated K+ channels (see Figure 2-1 and related text in Chapter 2: Pharmacodynamics: Mechanisms of Drug Action and the Relationship Between Drug Concentration and Effect). The GPCR family includes muscarinic cholinergic receptors, GABAB and metabotropic glutamate receptors, and all other aminergic and peptidergic receptors. By transfecting "null cells" with uncharacterized GPCR mRNAs, novel neuropeptides have been identified (see Reinscheid et al., 1995). A third receptor motif is represented by cell-surface receptors whose cytoplasmic domains possess catalytic activities, in particular, guanylyl cyclase (see Chapter 2: Pharmacodynamics: Mechanisms of Drug Action and the Relationship Between Drug Concentration and Effect). An additional molecular motif expressed within the CNS involves the transporters that remove transmitters after secretion by an ion-dependent reuptake process (Figure 12-5). Transporters exhibit a molecular motif with 12 hypothetical transmembrane domains similar to glucose transporters and to mammalian adenylyl cyclase (see Tang and Gilman, 1992). Figure 12-5. Predicted Structural Motif for Neurotransmitter Transporters. 
Transporters for the conservation of released amino acid or amine transmitters all share a presumptive twelve-transmembrane domain structure, although the exact orientation of the amino terminus is not clear. Transporters for amine transmitters found on synaptic vesicles also share a presumptive twelve-transmembrane domain structure, but one which is distinct from that of the transporters of the plasma membrane.

Postsynaptic receptivity of CNS neurons is regulated continuously in terms of the number of receptor sites and the threshold required for generation of a response. Receptor number often is dependent on the concentration of agonist to which the target cell is exposed. Thus, chronic excess of agonist can lead to a reduced number of receptors (desensitization or down-regulation) and consequently to subsensitivity or tolerance to the transmitter. A deficit of transmitter can lead to increased numbers of receptors and supersensitivity of the system. These adaptive processes become especially important when drugs are used to treat chronic illness of the CNS. With prolonged periods of exposure to drug, the actual mechanisms underlying the therapeutic effect may differ strikingly from those that operate when the agent is first introduced into the system. Similar adaptive modifications of neuronal systems also can occur at presynaptic sites, such as those concerned with transmitter synthesis, storage, reuptake, and release. Neurotransmitters, Neurohormones, and Neuromodulators: Contrasting Principles of Neuronal Regulation Neurotransmitters Satisfaction of the experimental criteria for identification of synaptic transmitters can lead to the conclusion that a substance contained in a neuron is secreted by that neuron to transmit information
to its postsynaptic target. Given a definite effect of neuron A on target cell B, a substance found in or secreted by neuron A and producing the effect of A on B operationally would be the transmitter from A to B. In some cases, transmitters may produce minimal effects on bioelectric properties yet activate or inactivate biochemical mechanisms necessary for responses to other circuits. Alternatively, the action of a transmitter may vary with the context of ongoing synaptic events, enhancing excitation or inhibition rather than operating to impose direct excitation or inhibition (see Bourne and Nicoll, 1993). Each chemical substance that fits within the broad definition of a transmitter may, therefore, require operational definition within the spatial and temporal domains in which a specific cell-cell circuit is defined. Those same properties may or may not be generalized to other cells that are contacted by the same presynaptic neurons, with the differences in operation related to differences in postsynaptic receptors and the mechanisms by which the activated receptor produces its effect. Classically, electrophysiological signs of the action of a bona fide transmitter fall into two major categories: (1) excitation, in which ion channels are opened to permit net influx of positively charged ions, leading to depolarization with a reduction in the electrical resistance of the membrane; and (2) inhibition, in which selective ion movements lead to hyperpolarization, also with decreased membrane resistance. More recent work suggests there may be many "nonclassical" transmitter mechanisms operating in the CNS. In some cases, either depolarization or hyperpolarization is accompanied by a decreased ionic conductance (increased membrane resistance) as actions of the transmitter lead to the closure of ion channels (so-called leak channels) that normally are open in some resting neurons (Shepherd, 1998). 
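The two classical categories can be restated quantitatively with the usual driving-force relation for an opened channel, I = g·(Vm − Erev): opening a conductance pulls the membrane potential toward that channel's reversal potential, so transmitter action is depolarizing when Erev lies above the resting potential and hyperpolarizing when it lies below. A small sketch with illustrative, textbook-style values (assumptions, not values from this chapter):

```python
def synaptic_current(g_nS, vm_mV, erev_mV):
    """Instantaneous synaptic current (pA); positive = outward (hyperpolarizing)."""
    return g_nS * (vm_mV - erev_mV)

def classify(vm_mV, erev_mV):
    """Classical sign of the response when a channel with this Erev opens."""
    return "depolarizing" if erev_mV > vm_mV else "hyperpolarizing"

V_REST = -65.0  # mV, illustrative resting potential

# Illustrative reversal potentials (mV) for two channel types
for name, erev in [("glutamate (cation channel)", 0.0),
                   ("GABA-A (Cl- channel)", -70.0)]:
    print(name, "->", classify(V_REST, erev),
          f"(I = {synaptic_current(1.0, V_REST, erev):+.1f} pA at rest)")
```

Note that both cases involve an increased conductance; the "nonclassical" leak-channel mechanisms described above instead change potential by closing channels, i.e., by decreasing conductance.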
For some transmitters, such as monoamines and certain peptides, a "conditional" action may be involved. That is, a transmitter substance may enhance or suppress the response of the target neuron to classical excitatory or inhibitory transmitters while producing little or no change in membrane potential or ionic conductance when applied alone. Such conditional responses have been termed modulatory, and specific categories of modulation have been hypothesized (see Nicoll et al., 1990; Foote and Aston-Jones, 1995). Regardless of the mechanisms that underlie such synaptic operations, their temporal and biophysical characteristics differ substantially from the rapid onset-offset type of effect previously thought to describe all synaptic events. These differences have thus raised the issue of whether or not substances that produce slow synaptic effects should be described with the same term: neurotransmitter. Some of the alternative terms and the molecules they describe deserve brief mention with regard to mechanisms of drug action. Neurohormones Peptide-secreting cells of the hypothalamico-hypophyseal circuits originally were described as neurosecretory cells, a form of neuron that was both fish and fowl, receiving synaptic information from other central neurons yet secreting transmitters in a hormone-like fashion into the circulation. The transmitter released from such neurons was termed a neurohormone, i.e., a substance secreted into the blood by a neuron. However, this term has lost most of its original meaning, because these hypothalamic neurons also may form traditional synapses with central neurons (Hökfelt et al., 1995, 2000). Cytochemical evidence indicates that the same substance that is secreted as a hormone from the posterior pituitary (oxytocin, antidiuretic hormone) mediates transmission at these sites. Thus, the designation hormone relates to the site of release at the pituitary and does not necessarily describe all of the actions of the peptide. 
Neuromodulators Florey (1967) employed the term modulator to describe substances that can influence neuronal activity in a manner different from that of neurotransmitters. In the context of this definition, the
distinctive feature of a modulator is that it originates from cellular and nonsynaptic sites, yet influences the excitability of nerve cells. Florey specifically designated substances such as CO2 and ammonia, arising from active neurons or glia, as potential modulators through nonsynaptic actions. Similarly, circulating steroid hormones, steroids produced in the nervous system (Baulieu, 1998), locally released adenosine and other purines, prostaglandins and other arachidonic acid metabolites, and nitric oxide (NO) (Gally et al., 1990) might all now be regarded as modulators. Neuromediators Substances that participate in the elicitation of the postsynaptic response to a transmitter fall under this heading. The clearest examples of such effects are provided by the involvement of adenosine 3',5'-monophosphate (cyclic AMP), guanosine 3',5'-monophosphate (cyclic GMP), and inositol phosphates as second messengers at specific sites of synaptic transmission (see Chapters 6: Neurotransmission: The Autonomic and Somatic Motor Nervous Systems, 7: Muscarinic Receptor Agonists and Antagonists, 10: Catecholamines, Sympathomimetic Drugs, and Adrenergic Receptor Antagonists, and 11: 5-Hydroxytryptamine (Serotonin): Receptor Agonists and Antagonists). However, it is technically difficult to demonstrate in brain that a change in the concentration of cyclic nucleotides occurs prior to the generation of the synaptic potential and that this change in concentration is both necessary and sufficient for its generation. It is possible that changes in the concentration of second messengers can occur and enhance the generation of synaptic potentials. Activation of second messenger-dependent protein phosphorylation reactions can initiate a complex cascade of precise molecular events that regulate the properties of membrane and cytoplasmic proteins that are central to neuronal excitability (Greengard et al., 1999). 
These possibilities are particularly pertinent to the action of drugs that augment or reduce transmitter effects (see below). Neurotrophic Factors Neurotrophic factors are substances produced within the CNS by neurons, astrocytes, microglia, or transiently invading peripheral inflammatory or immune cells that assist neurons in their attempts to repair damage. Seven categories of peptide factors have been recognized to operate in this fashion (see Black, 1999; McKay et al., 1999, for recent reviews): (1) the classic neurotrophins (nerve growth factor, brain-derived neurotrophic factor, and the related neurotrophins); (2) the neuropoietic factors, which have effects both in brain and in myeloid cells [e.g., cholinergic differentiation factor (also called leukemia inhibitory factor), ciliary neurotrophic factor, and some interleukins]; (3) growth factor peptides, such as epidermal growth factor, transforming growth factors α and β, glial cell line-derived neurotrophic factor, and activin A; (4) the fibroblast growth factors; (5) insulin-like growth factors; (6) platelet-derived growth factors; and (7) axon-guidance molecules, some of which also are capable of affecting cells of the immune system (see Song and Poo, 1999; Spriggs, 1999). Drugs designed to elicit the formation and secretion of these products as well as to emulate their actions could provide useful adjuncts to rehabilitative treatments. Central Neurotransmitters In examining the effects of drugs on the CNS with reference to the neurotransmitters for specific circuits, attention should be devoted to the general organizational principles of neurons. The view that synapses represent drug-modifiable control points within neuronal networks thus requires explicit delineation of the sites at which given neurotransmitters may operate and the degree of specificity with which such sites may be affected. 
One principle that underlies the following summaries of individual transmitter substances is the chemical-specificity hypothesis of Dale (1935), which holds that a given neuron releases the same transmitter substance at each of its synaptic terminals. In the face of growing indications that some neurons may contain more than one
transmitter substance (Hökfelt et al., 1995, 2000), Dale's hypothesis has been modified to indicate that a given neuron will secrete the same set of transmitters from all of its terminals. However, even this theory may require revision. For example, it is not clear whether or not a neuron that secretes a given peptide will process the precursor peptide to the same end product at all of its synaptic terminals. Table 12-1 provides an overview of the pharmacological properties of the transmitters in the CNS that have been studied extensively. Neurotransmitters are discussed below in terms of the groups of substances within given chemical categories: amino acids, amines, and neuropeptides. Other substances that may participate in central synaptic transmission include purines such as adenosine and ATP (see Edwards and Robertson, 1999; Moreau and Huber, 1999; Baraldi et al., 2000), nitric oxide (see Cork et al., 1998), and arachidonic acid derivatives (see Mechoulam et al., 1996; Piomelli et al., 1998). Amino Acids The CNS contains uniquely high concentrations of certain amino acids, notably glutamate and GABA; these amino acids are extremely potent in their ability to alter neuronal discharge. Initially, physiologists were reluctant to accept these simple substances as central neurotransmitters. Their ubiquitous distribution within the brain and the consistent observation that they produced prompt, powerful, and readily reversible but redundant effects on every neuron tested seemed out of keeping with the extreme heterogeneity of distribution and responsivity seen for other putative transmitters. The dicarboxylic amino acids produced near-universal excitation, and the monocarboxylic ω-amino acids (e.g., GABA, glycine, β-alanine, taurine) produced qualitatively similar and consistent inhibitions (Kelly and Beart, 1975). Following the emergence of selective antagonists to the amino acids, identification of selective receptors and receptor subtypes became possible. 
Together with the development of methods for mapping the ligands and their receptors, there is now strong evidence and widespread acceptance that the amino acids GABA, glycine, and glutamate are central transmitters. GABA GABA was identified as a unique chemical constituent of brain in 1950, but its potency as a CNS depressant was not immediately recognized. At the crustacean stretch receptor, GABA was identified as the only inhibitory amino acid found exclusively in crustacean inhibitory nerves, and the inhibitory potency of extracts of these nerves was accounted for by their content of GABA. Release of GABA correlated with the frequency of nerve stimulation, and application of GABA and inhibitory nerve stimulation produced identical increases of Cl− conductance in the muscle, fully satisfying the criteria for identification of GABA as the transmitter for this nerve (for further historical references, see Bloom, 1996). These same physiological and pharmacological properties later were found to be useful models in tests of a role for GABA in the mammalian CNS. Substantial data support the idea that GABA mediates the inhibitory actions of local interneurons in the brain and that GABA also may mediate presynaptic inhibition within the spinal cord. Presumptive GABA-ergic inhibitory synapses have been demonstrated most clearly between cerebellar Purkinje neurons and their targets in Deiters' nucleus; between small interneurons and the major output cells of the cerebellar cortex, olfactory bulb, cuneate nucleus, hippocampus, and lateral septal nucleus; and between the vestibular nucleus and the trochlear motoneurons. GABA also mediates inhibition within the cerebral cortex and between the caudate nucleus and the substantia nigra. 
GABA-ergic neurons and nerve terminals have been localized with immunocytochemical methods that visualize glutamic acid decarboxylase, the enzyme that catalyzes the synthesis of GABA from glutamic acid, or by in situ hybridization of the mRNAs for this protein. GABA-containing neurons frequently have been found to coexpress
one or more neuropeptides. The most useful drugs for confirmation of GABA-ergic mediation have been bicuculline and picrotoxin; however, many convulsants whose actions previously were unexplained (including penicillin and pentylenetetrazol) also may act as relatively selective antagonists of the action of GABA (Macdonald et al., 1992; Macdonald and Olsen, 1994). Useful therapeutic effects have not yet been obtained through the use of agents that mimic GABA (such as muscimol), inhibit its active reuptake (such as 2,4-diaminobutyrate, nipecotic acid, and guvacine), or alter its turnover (such as aminooxyacetic acid). GABA is the major inhibitory neurotransmitter in the mammalian CNS. Its receptors have been divided into two main types. The more prominent GABA receptor subtype, the GABAA receptor, is a ligand-gated Cl− ion channel, an "ionotropic receptor" that is opened after release of GABA from presynaptic neurons. A second receptor, the GABAB receptor, is a member of the GPCR family, as noted above, and is coupled both to biochemical pathways and to regulation of ion channels, a class of receptor generally referred to as "metabotropic" (Grifa et al., 1998; Billinton et al., 1999; Bräuner-Osborne and Krogsgaard-Larsen, 1999). The GABAA receptor subunit proteins have been well characterized due to their high abundance and the receptor's role in almost every neuronal circuit. The receptor also has been extensively characterized in its role as the site of action of many neuroactive drugs (see Chapter 17: Hypnotics and Sedatives). Notable among these are benzodiazepines and barbiturates. It has been suggested recently that direct interactions occur between GABAA receptors and anesthetic steroids, volatile anesthetics, and alcohol (Macdonald, Twyman et al., 1992). Based on sequence homology to the first GABAA subunit cDNAs, more than 15 other subunits have been cloned. 
In addition to these subunits, which are products of separate genes, mRNA splice variants for several subunits have been described. The GABAA receptor, by analogy with the classical ionotropic nicotinic cholinergic receptor, may be either a pentameric or tetrameric protein in which the subunits assemble together around a central ion pore, a structural format typical for all ionotropic receptors. Abundant evidence has shown that there are multiple subtypes of GABAA receptors in the brain. The existence of subtypes was first suggested by pharmacological differences. It is now known that receptors composed of particular subunits have distinct pharmacological properties (Barnard et al., 1988; Olsen et al., 1990; Seeburg et al., 1990), but the true heterogeneity of GABAA-receptor subtypes has yet to be fully defined. Differences in anatomical distribution of subunits and differences in the time course of development of genes expressing each subunit suggest that there are important functional differences among the subtypes. The subunit composition of the major form of the GABAA receptor contains at least three different subunits: α, β, and γ. The stoichiometry of these subunits is not known (De Blas, 1996). To interact with benzodiazepines with the profile expected of the native GABAA receptor, the receptor must contain each of these subunits. Inclusion of variant α, β, or γ subunits results in receptors with different pharmacological profiles (see Chapter 17: Hypnotics and Sedatives). Glycine Many of the features described for the GABAA receptor family also are features of the inhibitory glycine receptor that is prominent in the brainstem and spinal cord. Multiple subunits have been cloned, and they can assemble into a variety of glycine-receptor subtypes (Grenningloh et al., 1987; Malosio et al., 1991). These pharmacological subtypes are detected in brain tissue with particular neuroanatomical and neurodevelopmental profiles. However, as with the GABAA receptor, the
complete functional significance of the glycine receptor subtypes is not known. Glutamate and Aspartate Glutamate and aspartate are found in very high concentrations in brain, and both amino acids have extremely powerful excitatory effects on neurons in virtually every region of the CNS. Their widespread distribution tended to obscure their roles as transmitters, but there is now broad acceptance of the view that glutamate and possibly aspartate function as the principal fast ("classical") excitatory transmitters throughout the CNS (see Seeburg, 1993; Cotman et al., 1995; Herrling, 1997). Furthermore, over the past decade, multiple subtypes of receptors for excitatory amino acids have been characterized pharmacologically, based on the relative potencies of synthetic agonists and the discovery of potent and selective antagonists (see Herrling, 1997). Glutamate receptors, like those for GABA, are classified functionally either as ligand-gated ion channels ("ionotropic" receptors) or as "metabotropic" (G protein-coupled) receptors. Neither the precise number of subunits that assemble to generate a functional glutamate receptor ion channel in vivo nor the topography of each subunit has been established unequivocally (Borges and Dingledine, 1998; Dingledine et al., 1999). The ligand-gated ion channels are further classified according to the identity of agonists that selectively activate each receptor subtype. These receptors include α-amino-3-hydroxy-5-methyl-4-isoxazole propionic acid (AMPA), kainate, and N-methyl-D-aspartate (NMDA) receptors (Borges and Dingledine, 1998; Dingledine et al., 1999). A number of selective antagonists for these receptors now are available (Herrling, 1997). In the case of NMDA receptors, noncompetitive antagonists acting at various sites on the receptor protein have been described in addition to competitive (glutamate site) antagonists. 
These include open-channel blockers such as phencyclidine (PCP or "angel dust"), antagonists such as 5,7-dichlorokynurenic acid that act at an allosteric glycine-binding site, and the novel antagonist ifenprodil, which may act as a closed-channel blocker. In addition, the activity of NMDA receptors is sensitive to pH and also can be modulated by a variety of endogenous modulators including Zn2+, some neurosteroids, arachidonic acid, redox reagents, and polyamines such as spermine (for review, see Dingledine et al., 1999). Multiple cDNAs encoding metabotropic receptors and subunits of NMDA, AMPA, and kainate receptors have been cloned in recent years (Borges and Dingledine, 1998; Dingledine et al., 1999). The diversity of gene expression and, consequently, of the protein structure of glutamate receptors also arises by alternative splicing and in some cases by single-base editing of mRNAs encoding the receptors or receptor subunits. Alternative splicing has been described for metabotropic receptors and for subunits of NMDA, AMPA, and kainate receptors (Hollmann and Heinemann, 1994). A remarkable form of endogenous molecular engineering occurs with some subunits of AMPA and kainate receptors, in which the RNA sequence differs from the genomic sequence in a single codon of the receptor subunit and determines the extent of Ca2+ permeability of the receptor channel (Traynelis et al., 1995). This RNA-editing process alters the identity of a single amino acid (out of about 900 amino acids) that dictates whether or not the receptor channel gates Ca2+. The glutamate receptor genes seem to be unique families with only limited similarity to other ligand-gated channels such as the nicotinic acetylcholine receptor or, in the case of metabotropic receptors, to members of the GPCR superfamily. AMPA and kainate receptors mediate fast depolarization at most glutamatergic synapses in the brain and spinal cord.
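The single-codon editing step described above can be made concrete with a toy sketch. In the best-studied case (the so-called Q/R site of an AMPA-receptor subunit), adenosine-to-inosine editing causes the genomic glutamine codon CAG to be read as the arginine codon CGG, and the edited (arginine-containing) channel no longer gates Ca2+. The one-codon "transcript", site index, and function names below are hypothetical illustrations, not real sequence data:

```python
# Toy illustration of A-to-I RNA editing at a "Q/R site" codon.
# Inosine is decoded as G by the ribosome, so the genomic codon
# CAG (glutamine, Q) is translated as CGG (arginine, R).
CODON_TABLE = {"CAG": "Q", "CGG": "R"}  # only the two codons needed here

def edit_qr_site(mrna: str, site: int) -> str:
    """Replace the adenosine at the Q/R-site codon's second position
    with G, mimicking how inosine is read after editing."""
    start = site * 3
    codon = mrna[start:start + 3]
    edited = codon[0] + "G" + codon[2]   # A -> I (read as G)
    return mrna[:start] + edited + mrna[start + 3:]

def translate(mrna: str) -> str:
    """Translate codon-by-codon; unknown codons become 'X'."""
    return "".join(CODON_TABLE.get(mrna[i:i + 3], "X")
                   for i in range(0, len(mrna), 3))

unedited = "CAG"                       # hypothetical one-codon transcript
edited = edit_qr_site(unedited, 0)
print(translate(unedited), "->", translate(edited))  # Q -> R
```

The single Q-to-R substitution is the molecular switch the text describes: unedited (Q-containing) channels conduct Ca2+, edited (R-containing) channels do not.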
NMDA receptors also are involved in normal synaptic transmission, but activation of NMDA receptors is more closely associated with the induction of various forms of synaptic plasticity than with fast point-to-point signaling in the brain. AMPA or kainate receptors and NMDA receptors may be colocalized at many glutamatergic synapses. A well-characterized phenomenon that involves NMDA receptors is the induction of long-term potentiation (LTP). LTP refers to a prolonged (hours to days) increase in the size of a postsynaptic response to a presynaptic stimulus of given strength. Activation of NMDA receptors is obligatory for the induction of one type of LTP that occurs in the hippocampus (Bliss and Collingridge, 1993). NMDA receptors normally are blocked by Mg2+ at resting membrane potentials. Thus, activation of NMDA receptors requires not only binding of synaptically released glutamate but also simultaneous depolarization of the postsynaptic membrane. This is achieved by activation of AMPA/kainate receptors at nearby synapses by inputs from different neurons. Thus, NMDA receptors may function as coincidence detectors, being activated only when there is simultaneous firing of two or more neurons. Interestingly, NMDA receptors also can induce long-term depression (LTD; the flip side of LTP) at CNS synapses (Malenka and Nicoll, 1998). The frequency and pattern of synaptic stimulation appear to dictate whether a synapse undergoes LTP or LTD (see Malenka and Nicoll, 1999).

Glutamate Excitotoxicity

The ability of high concentrations of glutamate to produce neuronal cell death has been known for more than three decades (Olney, 1969), but the mechanisms by which glutamate and selective, rigid agonists of its receptors produce this effect have only recently begun to be clarified. The cascade of events leading to neuronal death initially was thought to be triggered exclusively by excessive activation of NMDA or AMPA/kainate receptors, which allow significant influx of Ca2+ into neurons.
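The coincidence-detection logic described earlier (glutamate binding plus depolarization sufficient to relieve the Mg2+ block) amounts to a logical AND, which the following toy sketch makes explicit. The function name and the −40 mV unblock threshold are illustrative assumptions, not values from the text, and the all-or-none block is a deliberate simplification of what is really a graded, voltage-dependent process:

```python
# Toy "coincidence detector" sketch of NMDA-receptor gating, assuming a
# simplified all-or-none Mg2+ block: the channel passes current only when
# glutamate is bound AND the membrane is depolarized enough to expel Mg2+.
MG_BLOCK_RELIEVED_MV = -40.0  # illustrative threshold, not a measured value

def nmda_open(glutamate_bound: bool, membrane_potential_mv: float) -> bool:
    """Return True only on coincidence of agonist binding and depolarization."""
    depolarized = membrane_potential_mv >= MG_BLOCK_RELIEVED_MV
    return glutamate_bound and depolarized

# At rest (~ -70 mV) glutamate alone is not enough; AMPA/kainate-driven
# depolarization from coincident inputs is what relieves the Mg2+ block.
print(nmda_open(True, -70.0))   # False: blocked by Mg2+ at rest
print(nmda_open(False, -30.0))  # False: no agonist bound
print(nmda_open(True, -30.0))   # True: coincidence of both signals
```

Only the third case opens the channel, which is why simultaneous activity in two or more inputs is required for NMDA-dependent induction of LTP.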
Such glutamate neurotoxicity was thought to underlie the damage that occurs after ischemia or hypoglycemia in the brain, during which a massive release and impaired reuptake of glutamate in the synapse would lead to excess stimulation of glutamate receptors and subsequent cell death. Although NMDA-receptor antagonists can attenuate or block neuronal cell death induced by activation of these receptors (see Herrling, 1997), even the most potent antagonists could not prevent all such damage. More recent studies (see Choi and Koh, 1998; Lee et al., 1999; Zipfel et al., 1999) implicate both local depletion of Na+ and K+, as well as small but significant elevations of extracellular Zn2+, as factors that can activate both necrotic and proapoptotic cascades (Merry and Korsmeyer, 1997) leading to neuronal death. NMDA receptors also may be involved in the development of susceptibility to epileptic seizures and in the occurrence of seizure activity (Blumcke et al., 1995). Cases of Rasmussen's encephalitis, a childhood disease leading to intractable seizures and dementia, were found to correlate with levels of serum antibodies to a glutamate receptor subunit (Rogers et al., 1994). Because of the widespread distribution of glutamate receptors in the CNS, it is likely that these receptors ultimately will become the targets for diverse therapeutic interventions. For example, a role for disordered glutamatergic transmission in the etiology of chronic neurodegenerative diseases and in schizophrenia has been postulated (Farber et al., 1998; Olney et al., 1999).

Acetylcholine

After acetylcholine (ACh) was identified as the transmitter at neuromuscular and parasympathetic neuroeffector junctions, as well as at the major synapse of autonomic ganglia (see Chapter 6: Neurotransmission: The Autonomic and Somatic Motor Nervous Systems), the amine began to receive considerable attention as a potential central neurotransmitter.
Based on its irregular distribution within the CNS and the observation that peripheral cholinergic drugs could produce marked behavioral effects after central administration, many investigators were willing to consider the possibility that ACh also might be a central neurotransmitter. In the late 1950s, Eccles and colleagues demonstrated that the recurrent excitation of spinal Renshaw neurons was sensitive to nicotinic cholinergic antagonists; these cells also were found to be cholinoceptive. Such observations were consistent with the chemical and functional specificity of Dale's hypothesis that all branches of a neuron released the same transmitter substance and, in this case, produced similar types of postsynaptic action (see Eccles, 1964). Although the ability of ACh to elicit neuronal discharge subsequently has been replicated on scores of CNS cells (see Shepherd, 1998), the spinal Renshaw cell remains the prototype for central nicotinic cholinergic synapses. Nevertheless, the search for selectively acting, central nicotinic drugs continues (Decker et al., 1997; Bannon et al., 1998). In most regions of the CNS, the effects of ACh, assessed either by iontophoresis or by radioligand receptor-displacement assays, appear to be generated by interaction with a mixture of nicotinic and muscarinic receptors. Several sets of presumptive cholinergic pathways have been proposed in addition to that of the motoneuron-Renshaw cell. By combination of immunocytochemistry of choline acetyltransferase (ChAT, the enzyme that synthesizes ACh) and ligand-binding or in situ hybridization studies for the detection of neurons expressing subunits of nicotinic and muscarinic receptors, eight major clusters of ACh neurons and their pathways have been characterized (Mesulam, 1995). Four separate groups of cell bodies located in the basal forebrain, between the septum and the nucleus basalis of Meynert, send largely autonomous projections to the neocortex, hippocampal formation, and olfactory bulb. While rodent brains exhibit cholinergic neurons that are intrinsic to the neocortex, these neurons are not found in primate brain.
Two collections of cholinergic neurons in the upper pons provide the major cholinergic innervation of thalamus and striatum, while medullary cholinergic neurons provide the cholinergic innervation of midbrain and brainstem regions. The intense cholinergic projections to neocortex and hippocampal formation will atrophy if these neurons are deprived of the trophic growth factors provided to them by retrograde axonal transport from their target neurons (Sofroniew et al., 1993). This occurs in Alzheimer's disease when these target neurons are diseased (see Chapter 22: Treatment of Central Nervous System Degenerative Disorders) and has driven therapeutic efforts to restore residual cholinergic signaling.

Catecholamines

The brain contains separate neuronal systems that utilize three different catecholamines: dopamine, norepinephrine, and epinephrine. Each system is anatomically distinct and serves separate, but similar, functional roles within its field of innervation. Much of the original mapping was performed in rodent brains (Hökfelt et al., 1976, 1977), but recent studies have extended these maps into primates (Foote, 1997; Lewis, 1997).

Dopamine

Although dopamine originally was regarded only as a precursor of norepinephrine, assays of distinct regions of the CNS eventually revealed that the distributions of dopamine and norepinephrine are markedly different. In fact, more than half the CNS content of catecholamine is dopamine, and extremely large amounts are found in the basal ganglia (especially the caudate nucleus), the nucleus accumbens, the olfactory tubercle, the central nucleus of the amygdala, the median eminence, and restricted fields of the frontal cortex. The anatomical connections of the dopamine-containing neurons are known with some precision.
Dopaminergic neurons fall into three major morphological classes: (1) ultrashort neurons within the amacrine cells of the retina and periglomerular cells of the olfactory bulb; (2) intermediate-length neurons within the tuberobasal ventral hypothalamus that innervate the median eminence and intermediate lobe of the pituitary, connect the dorsal and posterior hypothalamus with the lateral septal nuclei, and extend caudally to the dorsal motor nucleus of the vagus, the nucleus of the solitary tract, and the periaqueductal gray matter; and (3) long projections between the major dopamine-containing nuclei in the substantia nigra and ventral tegmentum and their targets in the striatum, in the limbic zones of the cerebral cortex, and in other major regions of the limbic system except the hippocampus (see Hökfelt et al., 1976, 1977). At the cellular level, the actions of dopamine depend on receptor subtype expression and the contingent convergent actions of other transmitters on the same target neurons. Although initial pharmacological studies discriminated between two subtypes of dopamine receptors, D1 (by which dopamine activates adenylyl cyclase) and D2 (by which dopamine inhibits adenylyl cyclase), subsequent cloning studies identified at least five genes encoding subtypes of dopamine receptors. Nevertheless, the two major categories, D1-like and D2-like, persist. The D1-like receptors include the D1 and D5 receptors, whereas the D2-like receptors include the two isoforms of the D2 receptor, differing in the length of their predicted third cytoplasmic loop and dubbed D2short (D2S) and D2long (D2L), as well as the D3 and D4 receptors (see Grandy and Civelli, 1992; Gingrich and Caron, 1993; Civelli, 1994). The D1 and D5 receptors activate adenylyl cyclase. The D2 receptors couple to multiple effector systems, including the inhibition of adenylyl cyclase activity, suppression of Ca2+ currents, and activation of K+ currents. The effector systems to which the D3 and D4 receptors couple have not been unequivocally defined (Sokoloff and Schwartz, 1995; Schwartz et al., 1998).
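The two-tier classification just described can be summarized as a simple lookup structure. This is a hypothetical sketch for orientation only; the effector couplings recorded here are those stated in the text (with D3/D4 couplings left undefined there):

```python
# Summary of the dopamine-receptor classification described above.
DOPAMINE_RECEPTORS = {
    "D1-like": {
        "D1": "activates adenylyl cyclase",
        "D5": "activates adenylyl cyclase",
    },
    "D2-like": {
        "D2S": "inhibits adenylyl cyclase; suppresses Ca2+ currents; activates K+ currents",
        "D2L": "inhibits adenylyl cyclase; suppresses Ca2+ currents; activates K+ currents",
        "D3": "effector coupling not unequivocally defined",
        "D4": "effector coupling not unequivocally defined",
    },
}

def family_of(subtype: str) -> str:
    """Return the major category ('D1-like' or 'D2-like') for a subtype."""
    for family, members in DOPAMINE_RECEPTORS.items():
        if subtype in members:
            return family
    raise KeyError(subtype)

print(family_of("D5"))  # D1-like
print(family_of("D3"))  # D2-like
```

Note that the pharmacological family (D1-like vs. D2-like) rather than the individual gene product is what the older D1/D2 literature refers to.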
D2 dopamine receptors have been implicated in the pathophysiology of schizophrenia and Parkinson's disease (see Chapters 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania and 22: Treatment of Central Nervous System Degenerative Disorders).

Norepinephrine

There are relatively large amounts of norepinephrine within the hypothalamus and in certain zones of the limbic system, such as the central nucleus of the amygdala and the dentate gyrus of the hippocampus. However, this catecholamine also is present in significant, although lower, amounts in most brain regions. Detailed mapping studies indicate that most noradrenergic neurons arise either in the locus ceruleus of the pons or in neurons of the lateral tegmental portion of the reticular formation. From these neurons, multiple branched axons innervate specific target cells in a large number of cortical, subcortical, and spinomedullary fields (Hökfelt et al., 1976, 1977; Foote and Aston-Jones, 1995; Foote, 1997). Although norepinephrine has been firmly established as the transmitter at synapses between presumptive noradrenergic pathways and a wide variety of target neurons, a number of features of the mode of action of this biogenic amine have complicated the acquisition of convincing evidence. In large part, these problems reflect its "nonclassical" electrophysiological synaptic actions, which result in "state-dependent" or "enabling" effects. In some instances, the pharmacological properties of such synapses have been complex, with evidence for mediation by both α- and β-adrenergic receptors. For example, stimulation of the locus ceruleus depresses the spontaneous activity of target neurons in the cerebellum; this is associated with a slowly developing hyperpolarization and a decrease in membrane conductance.
However, activation of the locus ceruleus affects the higher firing rates produced by stimulation of excitatory inputs to these neurons to a lesser degree, and excitatory postsynaptic potentials are enhanced. All consequences of activation of the locus ceruleus are emulated by the iontophoretic application of norepinephrine and are effectively blocked by β-adrenergic antagonists. Although the mechanisms underlying these effects are not at all clear, there is convincing evidence for intracellular mediation by cyclic AMP. The afferent projections to locus ceruleus neurons include medullary cholinergic neurons, opioid peptide neurons, raphe (5-HT) neurons, and corticotropin-releasing hormone neurons from the hypothalamus. The latter provides a link to stress reactions for this system (see Aston-Jones et al., 1999). As in the periphery, three families of adrenergic receptors have been described in the CNS (i.e., α1, α2, and β). Subtypes of α1-, α2-, and β-adrenergic receptors also exist in the CNS. These subtypes can be distinguished in terms of their pharmacological properties and their distribution (see Chapter 10: Catecholamines, Sympathomimetic Drugs, and Adrenergic Receptor Antagonists). The three subtypes of β-adrenergic receptor are all coupled to stimulation of adenylyl cyclase activity. Even though the proportion varies from region to region, β1-adrenergic receptors may be associated predominantly with neurons, while β2-adrenergic receptors may be more characteristic of glial and vascular elements. The α1 receptors on noradrenergic target neurons of the neocortex and thalamus respond to norepinephrine with prazosin-sensitive, depolarizing responses due to decreases in K+ conductances (both voltage-sensitive and voltage-insensitive; see Wang and McCormick, 1993). However, α1 receptors also can augment the generation of cyclic AMP by neocortical slices in response to vasoactive intestinal polypeptide (Magistretti et al., 1995). α1-Adrenergic receptors also are coupled to stimulation of phospholipase C, leading to release of inositol trisphosphate and diacylglycerol (see Chapter 2: Pharmacodynamics: Mechanisms of Drug Action and the Relationship Between Drug Concentration and Effect). α2-Adrenergic receptors are prominent on noradrenergic neurons, where they mediate a hyperpolarizing response due to enhancement of an inwardly rectifying K+ conductance. The latter type of K+ conductance also can be regulated by other transmitter systems (see Foote and Aston-Jones, 1995; see also Figure 12–2). In cortical projection fields, α2 receptors may help restore functional declines of senescence (Arnsten, 1993).
α2-Adrenergic receptors, like D2 dopamine receptors, are coupled to inhibition of adenylyl cyclase activity, but their effects in the CNS likely rely more on their ability to activate receptor-operated K+ channels and to suppress voltage-gated Ca2+ channels, both mediated via pertussis toxin-sensitive G proteins. Based on ligand-binding patterns and the properties of cloned receptors, three subtypes of α2-adrenergic receptor have been defined (α2A, α2B, and α2C), but all appear to couple to similar signaling pathways (see Bylund, 1992). Functional roles for these receptor subtypes are being defined through studies of transgenic mice in which the receptors are functionally absent (MacDonald et al., 1997).

Epinephrine

Neurons in the CNS that contain epinephrine were recognized only after the development of sensitive enzymatic assays for phenylethanolamine-N-methyltransferase and immunocytochemical staining techniques for the enzyme (see Hökfelt et al., 1976, and references therein). Epinephrine-containing neurons are found in the medullary reticular formation and make restricted connections to a few pontine and diencephalic nuclei, eventually coursing as far rostrally as the paraventricular nucleus of the dorsal midline thalamus. Their physiological properties have not been identified.

5-Hydroxytryptamine

Following the chemical determination that a biogenic substance found both in serum ("serotonin") and in gut ("enteramine") was 5-hydroxytryptamine (5-HT), assays for this substance revealed its presence in brain (see Chapter 11: 5-Hydroxytryptamine (Serotonin): Receptor Agonists and Antagonists). Since that time, studies of 5-HT have had a pivotal role in advancing our understanding of the neuropharmacology of the CNS. Various cytochemical methods have been used to trace the central anatomy of 5-HT-containing neurons in several species. Tryptaminergic neurons are found in nine nuclei lying in or adjacent to the midline (raphe) regions of the pons and upper brainstem, corresponding to well-defined nuclear ensembles (Steinbusch and Mulder, 1984). The rostral raphe nuclei innervate forebrain regions, while the caudal raphe nuclei project within the brainstem and spinal cord, with some overlap. The median raphe nucleus makes a major contribution to the innervation of the limbic system, and the dorsal raphe nucleus makes a similar contribution to cortical regions and the neostriatum. In the mammalian CNS, cells receiving cytochemically demonstrable tryptaminergic input, such as the suprachiasmatic nucleus, ventrolateral geniculate body, amygdala, and hippocampus, exhibit a uniform and dense investment of reactive terminals. Molecular biological approaches have led to identification of 14 distinct mammalian 5-HT-receptor subtypes. These subtypes exhibit characteristic ligand-binding profiles, couple to different intracellular signaling systems, exhibit subtype-specific distributions within the CNS, and mediate distinct behavioral effects of 5-HT. Present terminology groups the known 5-HT-receptor subtypes into multiple classes: the 5-HT1 and 5-HT2 classes of receptor are both G protein-coupled receptors with a seven-transmembrane-spanning-domain motif and include multiple isoforms within each class, while the 5-HT3 receptor is a ligand-gated ion channel with structural similarity to the α subunit of the nicotinic acetylcholine receptor. The 5-HT4, 5-HT5, 5-HT6, and 5-HT7 classes of receptor are all apparent GPCRs but have not yet been well studied electrophysiologically or operationally (Hoyer and Martin, 1996). Structural diversity among these subtypes of receptors indicates that they are representatives of distinct 5-HT receptor classes (see Chapter 11: 5-Hydroxytryptamine (Serotonin): Receptor Agonists and Antagonists for further discussion of the pharmacological properties of 5-HT-receptor subtypes).
As with all other genetically identified receptors, the new genetic perturbation models are assisting in the specification of function (see Murphy et al., 1999). The 5-HT1-receptor subset is composed of at least five intronless receptor subtypes (5-HT1A, 5-HT1B, 5-HT1D, 5-HT1E, 5-HT1F) that are linked to inhibition of adenylyl cyclase activity or to regulation of K+ or Ca2+ channels. The 5-HT1A receptors are abundantly expressed on 5-HT neurons of the dorsal raphe nucleus, where they are thought to be involved in temperature regulation. They also are found in regions of the CNS associated with mood and anxiety, such as the hippocampus and amygdala. Activation of 5-HT1A receptors leads to opening of an inwardly rectifying K+ conductance, which leads to hyperpolarization and neuronal inhibition. These receptors can be activated by the drugs buspirone and ipsapirone, which are used to treat anxiety and panic disorders (see Aghajanian, 1995). 5-HT1D receptors are potently activated by the drug sumatriptan, which is currently prescribed for acute management of migraine headaches. Three receptor subtypes constitute the 5-HT2 receptor class: 5-HT2A, 5-HT2B, and 5-HT2C. In contrast to the 5-HT1 receptors, these 5-HT2 receptors contain introns, and all are linked to activation of phospholipase C. Based on ligand-binding and mRNA in situ hybridization patterns, 5-HT2A receptors are enriched in forebrain regions such as neocortex and olfactory tubercle, as well as in several nuclei arising from the brainstem. On facial motoneurons, 5-HT enhances excitability by two mechanisms: (1) slow closure of resting K+ conductances, increasing membrane resistance; and (2) a more potent, ritanserin-antagonizable opening of a voltage-sensitive K+ conductance that is activated by hyperpolarization (Aghajanian, 1995).
In piriform cortex, Aghajanian and colleagues have observed an indirect inhibition of pyramidal neurons through activation of local-circuit, GABA-mediated inhibitory interneurons, an effect that is blocked by ritanserin. In the cerebral cortex, 5-HT2A-receptor agonists (but not 5-HT itself) also produce neuronal inhibition, but other effects reported may be the result of coexpression of multiple 5-HT-receptor subtypes on the same neuron. The 5-HT2C receptor, which is very similar in sequence and pharmacology to the 5-HT2A receptor, is expressed abundantly in the choroid plexus, where it regulates transferrin and cerebrospinal fluid production (see Chapter 11: 5-Hydroxytryptamine (Serotonin): Receptor Agonists and Antagonists). Receptors of the 5-HT3 class first were recognized in the peripheral autonomic system. They also are expressed in brain within the area postrema and nucleus tractus solitarius, where they couple to potent depolarizing responses that show rapid desensitization to continued 5-HT exposure. The 5-HT3 receptor leads to enhanced Na+ and K+ currents but does not seem to affect Ca2+ permeability. At the behavioral level, actions of 5-HT at central 5-HT3 receptors can lead to emesis and antinociceptive actions; 5-HT3-receptor antagonists such as ondansetron are beneficial in the management of chemotherapy-induced emesis (see Chapter 38: Prokinetic Agents, Antiemetics, and Agents Used in Irritable Bowel Syndrome). The hallucinogen LSD is among the most interesting of the compounds that interact with 5-HT, primarily through 5-HT2 receptors. In iontophoretic tests, LSD and 5-HT are both potent inhibitors of the firing of raphe (5-HT) neurons, whereas LSD and other hallucinogens are far more potent excitants on facial motoneurons that receive innervation from the raphe. The inhibitory effect of LSD on raphe neurons offers a plausible explanation for the drug's hallucinogenic effects, namely, that these effects result from depression of activity in a system that tonically inhibits visual and other sensory inputs. However, typical LSD-induced behavior is still seen in animals with destroyed raphe nuclei or after blockade of the synthesis of 5-HT by p-chlorophenylalanine. Other evidence against this explanation of LSD-induced hallucinations is the potentiation of the effects of LSD by administration of the 5-HT precursor, 5-hydroxytryptophan.
More precise definition of the various functional roles of tryptaminergic pathways in the CNS awaits the results of studies using more specific agents, whose number is steadily increasing (see Aghajanian and Marek, 1999).

Histamine

For many years, histamine and antihistamines that are active in the periphery have been known to produce significant effects on animal behavior. Relatively recently, however, evidence has accumulated to suggest that histamine also might be a central neurotransmitter. Biochemical detection of histamine synthesis by neurons, as well as direct cytochemical localization of these neurons, has established the existence of a histaminergic system in the CNS. Most of these neurons are located in the ventral posterior hypothalamus; they give rise to long ascending and descending tracts to the entire CNS that are typical of the patterns characteristic of other aminergic systems. Based on the presumptive central effects of histamine antagonists, the histaminergic system is thought to function in the regulation of arousal, body temperature, and vascular dynamics. Three subtypes of histamine receptors have been described. H1 receptors, the most prominent, may be located on glia and vessels as well as on neurons and may act to mobilize Ca2+ in receptive cells. H2 receptors are linked to the activation of adenylyl cyclase, perhaps in concert with H1 receptors in certain circumstances. H3 receptors, which have the greatest sensitivity to histamine, are localized much more selectively in basal ganglia and olfactory regions in the rat, but the consequences of their activation remain unresolved. Unlike the monoamines and amino acid transmitters, there does not appear to be an active reuptake process for histamine after its release. In addition, no direct evidence has been obtained for release of histamine from neurons either in vivo or in vitro (see Schwartz et al., 1995, for additional recent references).
Peptides

The discovery during the 1980s of numerous novel peptides in the CNS, each capable of regulating one or another aspect of neural function, produced considerable excitement and an imposing catalog of entities (see Hökfelt et al., 1995; Darlison and Richter, 1999). In addition, certain peptides previously thought to be restricted to the gut or to endocrine glands also have been found in the CNS. Relatively detailed neuronal maps are available that show immunoreactivity to peptide-specific antisera. While some CNS peptides may function on their own, most are now thought to act mainly in concert with coexisting transmitters, both amines and amino acids. Some neurons may contain more than two possible transmitters (see Hökfelt et al., 1995), and they can be independently regulated. At this time, at least three approaches appear to have utility in attempting to grasp the continuously enlarging peptidergic systems of neurons.

Organization by Peptide Families

Because of significant homology in amino acid sequences, families of related molecules can be defined as either ancestral or concurrent. The ancestral relationship is illustrated by peptides such as the tachykinin/substance P family or the vasotocin (vasopressin/oxytocin) family, in which species differences can be correlated with modest variations in peptide structure. The concurrent relationship is best exemplified by the endorphins and by the glucagon-secretin family. In the endorphin "superfamily," three major systems of endorphin peptides (proopiomelanocortin, proenkephalin, and prodynorphin) exist in independent neuronal circuits (see Akil et al., 1998, for a recent review). These natural opioid peptides arise from independent, but homologous, genes. The peptides all share some actions at receptors once classed generally as "opioid" but now undergoing progressive refinement (see Chapter 23: Opioid Analgesics).
In the glucagon family, multiple and somewhat homologous peptides are found simultaneously in different cells of the same organism but in separate organ systems: glucagon and vasoactive intestinal polypeptide (VIP) in pancreatic islets; secretin in duodenal mucosa; VIP and related peptides in enteric, autonomic, and central neurons (see Magistretti et al., 1998); and growth hormone-releasing factor in central neurons only (Guillemin et al., 1984). The general metabolic effects produced by this family can be viewed as leading to increased blood glucose. To some degree, ancestral and concurrent relationships are not mutually exclusive. For example, multiple members of the tachykinin/substance P family within mammalian brains and intestines may account for the apparent existence of subsets of receptors for these peptides (Vanden Broeck et al., 1999). The mammalian terminus of the vasotocin family shows two concurrent products as well, vasopressin and oxytocin, each having evolved to perform separate functions once executed by single vasotocin-related peptides in lower phyla.

Organization by Anatomic Pattern

Some peptide systems follow rather consistent anatomical organizations. Thus, the hypothalamic peptides oxytocin, vasopressin, proopiomelanocortin, gonadotropin-releasing hormone, and growth hormone-releasing hormone all tend to be synthesized by single large clusters of neurons that give off multibranched axons to several distant targets. Others, such as systems that contain somatostatin, cholecystokinin, and enkephalin, can have many forms, with patterns varying from moderately long, hierarchical connections to short-axon, local-circuit neurons that are widely distributed throughout the brain (see Hökfelt et al., 2000).

Organization by Function

Since almost all peptides initially were identified on the basis of bioassays, their names reflect these biologically assayed functions (e.g., thyrotropin-releasing hormone, vasoactive intestinal polypeptide).
These names become trivial if more ubiquitous distributions and additional functions are discovered. Some general integrative role might be hypothesized for widely separated neurons (and other cells) that make the same peptide. However, a more parsimonious view would be that each peptide has unique messenger roles at the cellular level and that these are used repeatedly in functionally similar pathways within large systems that differ in their overall functions. The cloning of the major members of the opioid-peptide receptors revealed unexpected and as yet unexplained conservation of sequences with receptors for somatostatin, angiotensin, and other peptides (see Uhl et al., 1994).

Comparison with Other Transmitters

Peptides differ in several important respects from the monoamine and amino acid transmitters considered earlier. Synthesis of a peptide is performed in the rough endoplasmic reticulum, where the mRNA for the propeptide is translated into an amino acid sequence. The propeptide is cleaved (processed) to the form that is secreted as the secretory vesicles are transported from the perinuclear cytoplasm to the nerve terminals. Further, no active reuptake mechanisms for peptides have been described (but see Honoré et al., 1999, for a possible exception). This increases the dependency of peptidergic nerve terminals on distant sites of synthesis. Perhaps most importantly, linear chains of amino acids can assume many conformations at their receptors, making it difficult to define the sequences and their steric relationships that are critical for activity. Until recently, it has been difficult to develop nonpeptidic, synthetic agonists or antagonists that will interact with specific receptors for peptides. However, such drugs are now being developed (for cholecystokinin CCK1 and CCK2 receptors, for neurotensin receptors, and for corticotropin-releasing-hormone receptors), and some (against substance P NK-1 receptors) have entered clinical trials (see Rupniak and Kramer, 1999; Hökfelt et al., 2000). Nature also has had limited success in this regard, since only one plant alkaloid, morphine, has been found to act selectively at peptidergic synapses.
Fortunately for pharmacologists, morphine was discovered before the endorphins, or rigid molecules capable of acting at peptide receptors might have been deemed impossible to develop.

Other Regulatory Substances

In addition to these major families of neurotransmitters, other endogenous substances also may participate in the regulated flow of signals between neurons, but in sequences of events that differ somewhat from the conventional concepts of neurotransmitter function. These substances have significant potential importance as regulatory factors and as targets for future drug development.

Purines

In addition to their roles as essential biochemical anabolites, adenosine monophosphate, adenosine triphosphate, and free adenosine have gained attention as independent neuronal signaling molecules in their own right (see Moreau and Huber, 1999; Williams et al., 1999; Baraldi et al., 2000). Two large families of purinergic receptors have been characterized. Those in the P1 class are GPCRs and have been further divided into four subtypes (A1–A4) based on the agonist actions of adenosine; A1 and A2 adenosine receptors are antagonized by xanthines, whereas A3 and A4 adenosine receptors are not. A1 adenosine receptors have been associated with inhibition of adenylyl cyclase, activation of K+ currents, activation of phospholipase C in some circumstances, and ion-channel regulation, while A2 receptors activate adenylyl cyclase. The P2 class of purine receptors comprises the receptors for ATP and related triphosphate nucleotides such as UTP. The P2X subtype is a ligand-gated ion channel, while the P2Y subtype is a GPCR. Adenosine can act presynaptically throughout the cortex and hippocampal formation to inhibit the release of amine and amino acid transmitters. ATP-regulated responses have been linked pharmacologically to a variety of supracellular functions, including anxiety, stroke, and epilepsy

(Williams, 1995).

Diffusible Mediators

Certain potent agents shown to be active under pharmacological conditions and inferred to be physiological regulators in systems throughout the body recently have been examined for their roles within the central nervous system. Arachidonic acid, normally stored within the cell membrane as a glycerol ester, can be liberated during phospholipid hydrolysis (by pathways involving phospholipases A2, C, and D). Phospholipases are activated by a variety of receptors. Arachidonic acid can be converted to highly reactive regulators by three major enzymatic pathways (see Chapter 26: Lipid-Derived Autacoids: Eicosanoids and Platelet-Activating Factor): cyclooxygenases (leading to prostaglandins and thromboxanes), lipoxygenases (leading to the leukotrienes and other transient catabolites of eicosatetraenoic acid), and cytochrome P450 (which is inducible, although expressed at low levels in brain). These arachidonic acid metabolites have been implicated as diffusible modulators in the CNS, particularly for long-term potentiation and other forms of plasticity (Mechoulam et al., 1996; Piomelli et al., 1998). Nitric oxide has been recognized as an important regulator of vascular and inflammatory mediation for more than a decade, but came into focus with respect to roles in the CNS only after successful efforts to characterize brain nitric oxide synthases (NOS; see Snyder and Dawson, 1995). Molecular cloning studies have now revealed at least four isoforms of this biosynthetic enzyme in the brain: a constitutive form present in some neurons, capillary endothelial cells, and macrophages, as well as inducible forms of the enzyme. The availability of potent activators and inhibitors of NOS has led to reports of the presumptive involvement of nitric oxide in a host of phenomena in the brain, including long-term potentiation, guanylyl cyclase activation, neurotransmitter release and reuptake, and enhancement of glutamate (NMDA)-mediated neurotoxicity.
Subsequently, rational analysis based on the proposed mechanism of NO action, binding to the iron in the active site of target enzymes, led to the idea that carbon monoxide may be a second gaseous, labile, diffusible intercellular regulator, at least in the regulation of guanylyl cyclase in neurons in vitro.

Cytokines

The term cytokines encompasses a large and diverse family of polypeptide regulators produced widely throughout the body by cells of diverse embryological origin. In general, these regulators have multiple functions attributed to effects under controlled conditions in vitro. In vivo, the effects of cytokines are further regulated by the conditions imposed by other cytokines, interacting as a network with variable effects leading to synergistic, additive, or opposing actions. Within the cytokines, tissue-produced peptidic factors termed chemokines serve to attract cells of the immune and inflammatory lineages into interstitial spaces. These special cytokines have received much attention as potential regulators in nervous system inflammation (as in the early stages of dementia following infection with human immunodeficiency virus; see Asensio et al., 1999; Mennicken et al., 1999) and during recovery from traumatic injury. The more conventional neuronal and glial-derived growth-enhancing and growth-retardant factors were mentioned above. The fact that, under some pathophysiological conditions, neurons and astrocytes may be induced to express cytokines or other growth factors further blurs the dividing line between neurons and glia.

Actions of Drugs in the CNS

Specificity and Nonspecificity of CNS Drug Actions

The effect of a drug is considered to be specific when it affects an identifiable molecular mechanism unique to target cells that bear receptors for that drug. Conversely, a drug is regarded as nonspecific when it produces effects on many different target cells and acts by diverse molecular mechanisms. This distinction is often a property of the dose-response relationship of the drug and the cell or mechanisms under scrutiny (see Chapter 3: Principles of Therapeutics). Even a drug that is highly specific when tested at a low concentration may exhibit nonspecific actions at substantially higher doses. Conversely, even generally acting drugs may not act equally on all levels of the CNS. For example, sedatives, hypnotics, and general anesthetics would have very limited utility if the central neurons that control the respiratory and cardiovascular systems were especially sensitive to their actions. Drugs with specific actions may produce nonspecific effects if the dose and route of administration produce high tissue concentrations. Drugs whose mechanisms currently appear to be primarily general or nonspecific are classed according to whether they produce behavioral depression or stimulation. Specifically acting CNS drugs can be classed more definitively according to their locus of action or specific therapeutic usefulness. It must be remembered that the absence of overt behavioral effects does not rule out the existence of important central actions for a given drug. For example, the impact of muscarinic cholinergic antagonists on the behavior of normal animals may be subtle, but these agents are used extensively in the treatment of movement disorders and motion sickness (see Chapter 7: Muscarinic Receptor Agonists and Antagonists).

General (Nonspecific) CNS Depressants

This category includes the anesthetic gases and vapors, the aliphatic alcohols, and some hypnotic-sedative drugs.
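The dose-dependence of specificity described above can be made concrete with a toy two-receptor occupancy model. This is only a sketch: the EC50 values and the simple Hill form are illustrative assumptions, not data for any real drug.

```python
def occupancy(conc, ec50, hill=1.0):
    """Fractional receptor occupancy from a simple Hill equation."""
    return conc**hill / (conc**hill + ec50**hill)

# Hypothetical drug with a 100-fold preference for its target receptor:
# target EC50 = 1 nM, off-target EC50 = 100 nM.
for conc in (0.1, 1.0, 10.0, 100.0, 1000.0):  # nM
    on = occupancy(conc, ec50=1.0)
    off = occupancy(conc, ec50=100.0)
    print(f"{conc:7.1f} nM   target {on:.2f}   off-target {off:.2f}")
```

At 1 nM the hypothetical drug occupies half of its target sites but only about 1% of the off-target sites, and so behaves "specifically"; at 1000 nM both sites are heavily occupied and the same drug acts nonspecifically.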
These agents share the ability to depress excitable tissue at all levels of the CNS, leading to a decrease in the amount of transmitter released by the nerve impulse, as well as to general depression of postsynaptic responsiveness and ion movement. At subanesthetic concentrations, these agents (e.g., ethanol) can exert relatively specific effects on certain groups of neurons, which may account for differences in their behavioral effects, especially the propensity to produce dependence (Koob and Le Moal, 1997; Koob et al., 1998; see also Chapters 14: General Anesthetics, 17: Hypnotics and Sedatives, 18: Ethanol, and 24: Drug Addiction and Drug Abuse).

General (Nonspecific) CNS Stimulants

The drugs in this category include pentylenetetrazol and related agents that are capable of powerful excitation of the CNS, and the methylxanthines, which have a much weaker stimulant action. Stimulation may be accomplished by one of two general mechanisms: (1) blockade of inhibition or (2) direct neuronal excitation (which may involve increased transmitter release, more prolonged transmitter action, labilization of the postsynaptic membrane, or decreased synaptic recovery time).

Drugs That Selectively Modify CNS Function

The agents in this group may cause either depression or excitation. In some instances, a drug may produce both effects simultaneously on different systems. Some agents in this category have little effect on the level of excitability in doses that are used therapeutically. The principal classes of these CNS drugs include the following: anticonvulsants, antiparkinsonism drugs, opioid and nonopioid analgesics, appetite suppressants, antiemetics, analgesic-antipyretics, certain stimulants,

antidepressants, antimanic agents, antipsychotic agents, sedatives, and hypnotics. Although selectivity of action may be remarkable, a drug usually affects several CNS functions to varying degrees. When only one constellation of effects is wanted in a therapeutic situation, the remaining effects of the drug are regarded as limitations in selectivity (i.e., unwanted side effects). The specificity of a drug's action frequently is overestimated, partly because the drug is identified with the effect implied by its class name.

General Characteristics of CNS Drugs

Combinations of centrally acting drugs frequently are administered to therapeutic advantage (e.g., an anticholinergic drug and levodopa for Parkinson's disease). However, other combinations of drugs may be detrimental because of potentially dangerous additive or mutually antagonistic effects. The effect of a CNS drug is additive with the physiological state and with the effects of other depressant and stimulant drugs. For example, anesthetics are less effective in a hyperexcitable subject than in a normal patient; the converse is true with respect to the effects of stimulants. In general, the depressant effects of drugs from all categories are additive (e.g., the fatal combination of barbiturates or benzodiazepines with ethanol), as are the effects of stimulants. Therefore, respiration depressed by morphine is further impaired by depressant drugs, while stimulant drugs can augment the excitatory effects of morphine to produce vomiting and convulsions. Antagonism between depressants and stimulants is variable. Some instances of true pharmacological antagonism among CNS drugs are known; for example, opioid antagonists are very selective in blocking the effects of opioid analgesics. However, the antagonism exhibited between two CNS drugs is usually physiological in nature; thus, an individual who has received one drug cannot be returned entirely to normal by another.
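The dose additivity of depressants discussed above is often quantified with the Loewe dose-equivalence relation (the combination index), a standard pharmacological construct that this chapter does not itself introduce; the doses below are purely hypothetical.

```python
def combination_index(d1, D1, d2, D2):
    """Loewe combination index for two drugs given together at doses d1, d2.

    D1 and D2 are the doses of each drug alone that produce the target
    effect. For a combination that produces the same effect:
    index == 1 implies simple additivity, < 1 synergy, > 1 antagonism.
    """
    return d1 / D1 + d2 / D2

# Hypothetical sedatives: drug A alone at 10 mg, or drug B alone at 50 mg,
# each produces the endpoint. Half of each dose given together is additive:
print(combination_index(d1=5, D1=10, d2=25, D2=50))  # 1.0
```

The index is interpreted only at dose pairs that jointly produce the chosen effect; a value below 1 there means less of each drug was needed than dose equivalence predicts.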
The selective effects of drugs on specific neurotransmitter systems may be additive or competitive. This potential for drug interaction must be considered whenever such drugs are administered concurrently. To minimize such interactions, a drug-free period may be required when therapy is modified. An excitatory effect is commonly observed with low concentrations of certain depressant drugs, owing either to depression of inhibitory systems or to a transient increase in the release of excitatory transmitters. Examples are the "stage of excitement" during induction of general anesthesia and the "stimulant" effects of alcohol. The excitatory phase occurs only with low concentrations of the depressant; uniform depression ensues as the drug concentration increases. The excitatory effects can be minimized, when appropriate, by pretreatment with a depressant drug that is devoid of such effects (e.g., benzodiazepines in preanesthetic medication). Acute, excessive stimulation of the cerebrospinal axis normally is followed by depression, which is in part a consequence of neuronal fatigue and exhaustion of transmitter stores. Postictal depression is additive with the effects of depressant drugs. Acute, drug-induced depression is not, as a rule, followed by stimulation. However, chronic drug-induced sedation or depression may be followed by prolonged hyperexcitability upon abrupt withdrawal of the medication (barbiturates, alcohol). This type of hyperexcitability can be controlled effectively by the same or another depressant drug (see Chapters 17: Hypnotics and Sedatives and 18: Ethanol).

Organization of CNS–Drug Interactions

The structural and functional properties of neurons provide a means to specify the possible sites at which drugs could interact specifically or generally in the CNS (see Figure 12–1). In this scheme,

drugs that affect neuronal energy metabolism, membrane integrity, or transmembrane ionic equilibria would be generally acting compounds. Similarly general in action would be drugs that affect the two-way intracellular transport systems (e.g., colchicine). These general effects still can exhibit different dose-response or time-response relationships based, for example, on such neuronal properties as rate of firing, dependence of discharge on external stimuli or internal pacemakers, resting ionic fluxes, or axon length. In contrast, when drug actions can be related to specific aspects of the metabolism, release, or function of a neurotransmitter, the site, specificity, and mechanism of action of a drug can be defined by systematic studies of dose-response and time-response relationships. From such data the most sensitive, rapid, or persistent neuronal event can be identified. Transmitter-dependent actions of drugs can be organized conveniently into presynaptic and postsynaptic categories. The presynaptic category includes all of the events in the perikaryon and nerve terminal that regulate transmitter synthesis (including the acquisition of adequate substrates and cofactors), storage, release, reuptake, and catabolism. Transmitter concentrations can be lowered by blockade of synthesis, storage, or both. The amount of transmitter released per impulse is generally stable but also can be regulated. The effective concentration of transmitter may be increased by inhibition of reuptake or by blockade of catabolic enzymes. The transmitter that is released at a synapse also can exert actions on the terminal from which it was released by interacting with receptors at these sites (termed autoreceptors; see above). Activation of presynaptic autoreceptors can slow the rate of discharge of transmitter and thereby provide a feedback mechanism that controls the concentration of transmitter in the synaptic cleft.
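The autoreceptor feedback mechanism just described can be sketched as a minimal discrete-time simulation. All constants here are illustrative assumptions; real synaptic kinetics are far more complex. Each cycle, release adds transmitter to the cleft, clearance removes a fraction, and rising autoreceptor occupancy scales down the next release.

```python
def simulate_autoreceptor(steps=50, base_release=1.0, clearance=0.3,
                          autoreceptor_ec50=2.0):
    """Toy negative-feedback loop: cleft transmitter suppresses its own release."""
    cleft = 0.0
    history = []
    for _ in range(steps):
        # Autoreceptor occupancy rises with cleft transmitter (simple binding)
        occ = cleft / (cleft + autoreceptor_ec50)
        release = base_release * (1.0 - occ)       # feedback-scaled release
        cleft = (cleft + release) * (1.0 - clearance)  # reuptake/catabolism
        history.append(cleft)
    return history

levels = simulate_autoreceptor()
print(f"steady-state cleft level ~ {levels[-1]:.3f}")
```

With these parameters the loop settles below the level it would reach without feedback: the autoreceptor clamps the cleft transmitter concentration, which is the feedback role described in the text.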
The postsynaptic category includes all the events that follow release of the transmitter in the vicinity of the postsynaptic receptor: in particular, the molecular mechanisms by which occupancy of the receptor by the transmitter results in changes in the properties of the membrane of the postsynaptic cell (shifts in membrane potential), as well as more enduring biochemical actions (changes in intracellular cyclic nucleotides, protein kinase activity, and related substrate proteins). Direct postsynaptic effects of drugs generally require relatively high affinity for the receptors or resistance to metabolic degradation. Each of these presynaptic or postsynaptic actions is potentially highly specific and can be envisioned as being restricted to a single, chemically defined subset of CNS cells.

Convergence, Synergism, and Antagonism Result from Transmitter Interactions

A hallmark of modern neuropharmacology is the capacity to clone receptor or receptor-subunit cDNAs and to determine their properties by expression in cells that do not normally express the receptor or subunit being studied. The simplicity of in vitro models of this type may divert attention from the fact that, in the intact CNS, a given neurotransmitter may interact simultaneously with all of the various isoforms of its receptor, on neurons that are also under the influence of multiple other afferent pathways and their transmitters. Thus, attempts to predict the behavioral or therapeutic consequences of drugs designed to elicit precise and restricted receptor actions may fail, both because of differences between normal and diseased conditions and as a consequence of the complexity of the possible interactions.

Chapter 13. History and Principles of Anesthesiology

*Overview

Prior to 1846, attempts to provide comfort during operative procedures were minimally effective and the development of surgery was necessarily limited. William T.G. Morton's public demonstration of ether in that year revolutionized medical care throughout the world. The evolution of anesthesiology as a medical specialty has facilitated the success of modern, complex surgical procedures. Beyond the obtundation of consciousness and creation of a quiescent surgical field, anesthesiology applies principles of physiology, pathophysiology, and pharmacology to assess and reduce surgical risk, maintain homeostasis, attenuate the surgical stress response, and provide analgesia. In this chapter, we explore the salient features of the preoperative, intraoperative, and postoperative periods, highlighting recent discoveries including anesthetic receptor specificity, identification of the neural correlates of consciousness, and new technology to assess levels of awareness.

*History of Surgical Anesthesia


Anesthesia before 1846

Surgical procedures were uncommon before 1846. Understanding of the pathophysiology of disease and of the rationale for its treatment by surgery was rudimentary. Aseptic technique and the prevention of wound infection were almost unknown. In addition, the lack of satisfactory anesthesia was a major deterrent. Because of all these factors, few operations were attempted, and mortality was frequent. Typically, surgery was of an emergency nature: for example, amputation of a limb for open fracture or drainage of an abscess. Fine dissection and careful technique were not possible in patients for whom relief of pain was inadequate. Some means of attempting to relieve surgical pain were available and, in fact, had been used since ancient times. Drugs such as alcohol, hashish, and opium derivatives, taken by mouth, provided some consolation. Physical methods for the production of analgesia, such as packing a limb in ice or making it ischemic with a tourniquet, occasionally were used. Unconsciousness induced by a blow to the head or by strangulation did provide relief from pain, although at a high cost. However, the most common method used to achieve a relatively quiet surgical field was simple restraint of the patient by force. It is no wonder that surgery was looked upon as a last resort. Although the analgesic properties of both nitrous oxide and diethyl ether had been known to a few for years, the agents were not used for medical purposes. Nitrous oxide was synthesized by Priestley in 1776, and both he and Humphry Davy some 20 years later commented upon its anesthetic properties (Faulconer and Keys, 1965). Davy in fact suggested that ". . . it may probably be used with advantage during surgical operations in which no great effusion of blood takes place." Another 20 years passed before Michael Faraday wrote that the inhalation of diethyl ether produced effects similar to those of nitrous oxide.
However, except for their inhalation in carnival exhibitions or to produce "highs" at "ether frolics," these drugs were not used in human beings until the mid-nineteenth century. Greene (1971) has presented an analysis of the reasons for the introduction of anesthesia in the 1840s. The time was then right, since concern for the well-being of one's fellows, a humanitarian attitude, was more prevalent than it had been in the previous century: "So long as witches were being burned in Salem, anesthesia could not be discovered 20 miles away in Boston." While humanitarian concern extended to the relief of pain, chemistry and medicine had simultaneously advanced to such an extent that a chemically pure drug could be prepared and then used with some degree of safety. There was, too, growth of the inquisitive spirit, a search for improvement of the

human condition.

Public Demonstration of Ether Anesthesia

Dentists were instrumental in the introduction of both diethyl ether and nitrous oxide. They, even more than physicians, came into daily contact with persons complaining of pain; often, as a byproduct of their work, they produced pain. It was at a stage show that Horace Wells, a dentist, noted that one of the participants, while under the influence of nitrous oxide, injured himself yet felt no pain. The next day, Wells, while breathing nitrous oxide, had one of his own teeth extracted, painlessly, by a colleague. Shortly thereafter, in 1845, Wells attempted to demonstrate his discovery at the Massachusetts General Hospital in Boston. Unfortunately, the patient cried out during the operation, and the demonstration was deemed a failure. William T. G. Morton, a Boston dentist (and medical student), was familiar with the use of nitrous oxide from a previous association with Horace Wells. Morton learned of ether's anesthetic effects, thought it more promising, and practiced with it on animals and then on himself. Finally, he asked permission to demonstrate the drug's use, publicly, as a surgical anesthetic. The story of this classic demonstration in 1846 has been retold countless times. The operating room ("ether dome") at the Massachusetts General Hospital remains as a memorial to the first public demonstration of surgical anesthesia. In the gallery of this room skeptical spectators gathered, for the news had spread that a second-year medical student had developed a method for abolishing surgical pain. The patient, Gilbert Abbott, was brought in, and Dr. Warren, the surgeon, waited in formal morning clothes. Operating gowns, masks, gloves, surgical asepsis, and the bacterial origin of infection were entirely unknown at that time. Everyone was ready and waiting, including the strong men to hold down the struggling patient, but Morton did not appear.
Fifteen minutes passed, and the surgeon, becoming impatient, took his scalpel and turning to the gallery said, "As Dr. Morton has not arrived, I presume he is otherwise engaged." While the audience smiled and the patient cringed, the surgeon turned to make his incision. Just then Morton entered, his tardiness being due to the necessity for completing an apparatus with which to administer the ether. Warren stepped back, and pointing to the man strapped to the operating table said, "Well, sir, your patient is ready." Surrounded by a silent and unsympathetic audience, Morton went quietly to work. After a few minutes of ether inhalation, the patient was unconscious, whereupon Morton looked up and said, "Dr. Warren, your patient is ready." The operation was begun. The patient showed no sign of pain, yet he was alive and breathing. The strong men were not needed. When the operation was completed, Dr. Warren turned to the astonished audience and made the famous statement, "Gentlemen, this is no humbug." Dr. Henry J. Bigelow, an eminent surgeon attending the demonstration, remarked, "I have seen something today that will go around the world." Following initial disbelief, news of the successful demonstration spread rapidly. Within a month, ether was in use in other cities of the United States and had been given in Great Britain as well. Its use soon was established as legitimate medical therapy. The lives of those involved in the introduction of surgical anesthesia did not have so salubrious an outcome. Morton initially tried to patent the use of ether to produce anesthesia and, when this failed, patented instead his device for its administration. Considerable wrangling ensued as to who was the legitimate discoverer of anesthesia. Never receiving what he felt to be his due, Morton died an embittered man. Charles Jackson, Morton's chemistry teacher at Harvard, also claimed priority in the discovery; it was he who had suggested that Morton use pure sulfuric ether. 
Jackson became insane, a fate that

also befell Horace Wells, the man who had failed in the public demonstration of nitrous oxide anesthesia. Crawford Long, a physician in rural Georgia, had used ether anesthesia since 1842 but neglected to publish his experiences. He survived and prospered, but Morton rightfully receives credit for the introduction of surgical anesthesia. A monument erected by the citizens of Boston over the grave of Dr. Morton, in Mt. Auburn Cemetery near Boston, bears the following inscription, written by Dr. Jacob Bigelow:
WILLIAM T. G. MORTON

Inventor and Revealer of Anaesthetic Inhalation. Before Whom, in All Time, Surgery Was Agony. By Whom Pain in Surgery Was Averted and Annulled. Since Whom Science Has Control of Pain.

Anesthesia after 1846

Although it is rarely used today, ether was the ideal "first" anesthetic. Chemically, it is readily made in pure form. It is relatively easy to administer, since it is a liquid at room temperature but is readily vaporized. Ether is potent, unlike nitrous oxide, and thus a few volumes percent can produce anesthesia without diluting the oxygen in room air to hypoxic levels. It supports both respiration and circulation, crucial properties at a time when human physiology was not understood well enough for assisted respiration and circulation to be possible. And ether is not toxic to vital organs. The next anesthetic to receive wide use was chloroform. Introduced by the Scottish obstetrician James Simpson in 1847, it became quite popular, perhaps because of its more pleasant odor. Other than this and its nonflammability, there was little to recommend it (Sykes, 1960). The drug is a hepatotoxin and a severe cardiovascular depressant. Despite the relatively high incidence of intraoperative and postoperative death associated with its use, chloroform was championed, especially in Great Britain, for nearly 100 years. Because of the danger and difficulty of administering chloroform, distinguished British physicians became interested in anesthetics and their administration, a trend that was not evident in the United States until 100 years later. The course of anesthesiology in the United States, after the initial burst of enthusiasm, was one of slow change and limited progress. Furthermore, despite the relative comfort that the surgical patient experienced, the amount and scope of surgery increased only slightly during the 1840s and 1850s (Greene, 1979). The incidence of mortality was little changed, for postoperative infection was still a serious problem.
Only with the introduction of aseptic techniques 20 years after the discovery of anesthesia did surgery come into its own.

Other Anesthetic Agents

Nitrous oxide fell into disuse after the apparent failure in Boston in 1845. It was reintroduced into American dental and surgical practice in 1863, largely through the efforts of Gardner Q. Colton, a showman, entrepreneur, and partially trained physician. In 1868, the administration of nitrous oxide with oxygen was described by Edmond Andrews, a Chicago surgeon, and soon thereafter the two gases became available in steel cylinders, greatly increasing their practicality. Nitrous oxide is still widely used today.

The anesthetic properties of cyclopropane were accidentally discovered in 1929, when chemists were analyzing impurities in an isomer, propylene. After extensive clinical trial at the University of Wisconsin, the drug was introduced into practice; cyclopropane was perhaps the most widely used general anesthetic for the next 30 years. However, with the increasing risk of explosion in the operating room brought about by the use of electronic equipment, the need for a safe, nonflammable anesthetic increased, and several groups pursued the search. Efforts by the British Research Council and by chemists at Imperial Chemical Industries were rewarded by the development of halothane, a nonflammable anesthetic agent that was introduced into clinical practice in 1956; it revolutionized inhalational anesthesia. Most of the newer agents, which are halogenated hydrocarbons and ethers, are modeled after halothane. The skeletal muscle relaxants (neuromuscular blocking agents) also were discovered and their pharmacological properties demonstrated long before their introduction into clinical practice. Curare, in crude form, had long been used by South American Indians as a poison on their arrow tips (see Chapter 9: Agents Acting at the Neuromuscular Junction and Autonomic Ganglia). Its first clinical use was in spastic disorders, where it could decrease muscle tone without compromising respiration excessively. It was then used to modify the violent muscle contractions associated with electroconvulsive therapy of psychiatric disorders. Finally, in the 1940s, anesthesiologists used curare to provide the muscular relaxation that previously could be obtained only with deep levels of general anesthesia. Over the next half-dozen years several synthetic substitutes were used clinically. It is difficult to overemphasize the importance of muscle relaxants in anesthetic practice. 
Their use permits adequate conditions for surgery with light levels of general anesthesia; cardiovascular depression is thus minimized, and the patient awakens promptly when the anesthetic is discontinued. Although the desirability of an intravenous anesthetic agent must have been apparent to physicians early in the twentieth century, the drugs at hand were few and unsatisfactory. The situation changed dramatically in 1935, when Lundy demonstrated the clinical usefulness of thiopental, a rapidly acting thiobarbiturate. It was originally considered useful as a sole anesthetic agent, but the doses required resulted in serious depression of the circulatory, respiratory, and nervous systems. Thiopental, however, has been enthusiastically accepted as an agent for the rapid induction of general anesthesia. Various combinations of intravenous drugs from several classes have been used recently as anesthetic agents, usually together with nitrous oxide. The administration of short-acting opioids by constant intravenous infusion (with little or no potent inhalational agent) is an exciting current development in the practice of anesthesia.

*Modern Anesthesiology
What Is Anesthesia?

The answer to this question is both more complex and more elusive than generally appreciated. To guide the discussion, we may first consider the basic goals of anesthesia, namely, to create a reversible condition of comfort, quiescence, and physiological stability in a patient before, during, and after the performance of a procedure that would otherwise be painful, frightening, or hazardous. This statement embodies concepts that have evolved with modern developments within the specialty of anesthesiology and that were not necessarily envisioned by early workers.

After the public demonstration of diethyl ether in 1846, anesthesia was eagerly embraced by the general public and the medical profession, even as complications associated with its use were noted with concern (Codman, 1917). For several decades, the dramatic increase in the number of surgical procedures performed was tracked precisely by increases in the deaths and major morbidities attributed to anesthesia (Sykes, 1960). Foremost among the complications were regurgitation and aspiration of stomach contents and cardiovascular collapse, now thought to have been disturbances of heart rhythm resulting from an interaction between the direct effects of the agents used and the physiological response to surgical stress. To realize fully the benefits and the promise of anesthesia, laboratory and clinical researchers have investigated the pharmacological and physiological actions of potent new therapeutic agents, guided the development of monitoring equipment and drug-delivery devices, and created advanced techniques and principles of practice (Wiklund and Rosenbaum, 1997). Recent refinements have included progressive attention to issues of risk assessment and risk reduction. Unlike the practice of every other branch of medicine, anesthesia usually is considered to be neither therapeutic nor diagnostic. The notable exceptions, including treatment of status asthmaticus with halothane and of intractable angina with epidural local anesthetics (among other examples), should not obscure this critical point, which permeates the training and practice of the specialty. Patients present for surgery with an array of medical conditions, both known and unknown, while taking drugs that alter cardiovascular and other responses. They then undergo a series of physiological stressors from which they must be protected, including the effects of the very agents used to initiate and sustain the anesthetic condition.
Reduction of complications may be separated for illustrative purposes into three categories:

1. Minimizing the potentially deleterious direct and indirect effects of anesthetic agents and techniques, including perturbations of cardiac rhythm and contractility, alterations of vascular tone, blunting of protective reflexes, and changes in metabolic rate and thermoregulation.
2. Sustaining homeostasis during surgical procedures that involve major blood loss, tissue ischemia, reperfusion of ischemic tissue, fluid shifts, exposure to a cold environment, and impaired coagulation.
3. Improving postoperative outcomes by choosing techniques that block or treat components of the surgical stress response, which would otherwise lead to short- or long-term sequelae.

The Surgical Stress Response
The stress of surgery includes (presumably) adaptive responses involving three systems, the hypothalamic-pituitary-adrenal axis, the sympathetic nervous system, and the acute-phase response, all of which may be activated by psychological stress, tissue injury, intravascular volume changes, anesthetic agents, pain, and organ manipulation (Udelsman and Holbrook, 1994). These stimuli trigger a cascade of neurohumoral responses, including increases in cortisol, catecholamines, heat shock proteins, and cytokines, which, in turn, provoke tachycardia, hypertension, increased metabolism, hypercoagulability, and decreased immune function (Breslow, 1998). Specific associated morbidities include myocardial ischemia and infarction (Mangano et al., 1996), arrhythmias (Balser et al., 1998), thrombosis, infection, and delayed wound healing. The effects of anesthesia attenuate some components of the surgical stress response.
In addition to promoting stability within the clinical milieu described above, it should be emphasized that the kinetics of anesthetic agents and the techniques used must conform to certain time constraints, so that the duration and depth of anesthetic states parallel the tempo of the surgical procedure. Hence the uptake, distribution, and elimination of anesthetic drugs are important matters, and the discovery of agents with rapid onset and elimination has greatly improved this aspect of care. The rest of this chapter is organized around discussions of the functionally separable time periods: before (preoperative), during (intraoperative), and after (postoperative) surgery, illustrating within each period the principles of perioperative medical care and the anesthesia-specific issues as they logically appear.

*Preoperative Period
Anesthetic considerations prior to surgery include patient evaluation and the administration of medications that treat chronic or acute disease and that facilitate the impending anesthetic experience. Preexisting comorbidities are important determinants of perioperative risk. Risk prediction indices or algorithms have been developed (Goldman et al., 1977; Palda and Detsky, 1997) that incorporate several pathophysiological conditions, including known or probable coronary artery disease; electrocardiogram (ECG) changes; signs and symptoms of congestive heart failure; abnormalities indicating pulmonary, renal, or hepatic disease; patient age; and invasiveness of the planned surgical procedure. Each of these conditions has one or more treatment options that have been developed to neutralize its effect and prevent it from worsening during or after the procedure; this is a major feature of the practice of anesthesiology. Decisions regarding the techniques employed, agents chosen, and monitors used are made on the basis of the preoperative information (Sweitzer, 2000). Depending on the patient's condition, interventions suggested by the preoperative evaluation range from preoperative coronary angiography (with balloon angioplasty or coronary artery bypass grafting in appropriate cases) (Eagle et al., 1996) and optimization of cardiac loading conditions guided by data from pulmonary artery catheters prior to surgery (Berlauk et al., 1991) to simple corrections of electrolyte and hemoglobin abnormalities and institution of antihypertensive therapy. Novel proposals for preoperative assessment are emerging from modern techniques of molecular biology. Genetic polymorphisms, the discovery of which has been accelerated as a consequence of the mapping of the human genome, are being linked to medical conditions (hypertension, coagulation disorders, arrhythmias) and to variable responses to therapy. The challenge now is to apply these same concepts to the surgical environment.
Preoperative evaluation and risk assessment may evolve to include broad screening for polymorphisms associated with morbidity and thereby guide risk-reduction therapies.

Preoperative Medication
Chronic Medications
Preoperative medication begins with virtually all of the patient's normal daily morning doses of significant drugs. These include inotropic, chronotropic, dromotropic, and vasoactive agents, especially antihypertensive agents. Diuretics are controversial, as are metformin and monoamine oxidase inhibitors. The latter agents have serious interactions with meperidine and other drugs used during surgery, but these interactions can be managed. The management of insulin-dependent diabetes and chronic steroid use is addressed formally by protocols. Patients dependent on drugs that are associated with withdrawal symptoms must be given special treatment. The importance of maintaining cardiovascular medications is illustrated by clinical studies showing that the incidence and severity of myocardial ischemia are associated with elevated heart rates in the postoperative period. This finding led to clinical trials of prophylactic, perioperative administration of β-adrenergic receptor antagonists in high-risk patients. Preoperative and postoperative administration of the β-receptor antagonist atenolol yielded a significant reduction of myocardial ischemia and a reduction in mortality (at two years) in the treatment group (Mangano et al., 1996). This has been confirmed in a study of high-risk patients given bisoprolol, who had significantly lower rates of myocardial infarction and death than did control patients (Poldermans et al., 1999). Previous studies of prophylactic nitroglycerin and calcium channel blockers had failed to show a benefit. Other preoperative medications are used to treat conditions directly related to anesthetic issues that may arise before, during, and after surgery.

Anticholinergic Drugs
Though previously widely employed for their vagolytic and membrane-drying properties, anticholinergic agents (see Chapter 7: Muscarinic Receptor Agonists and Antagonists) are little used preoperatively in adults in modern practice, except in specific situations requiring reduced secretions. Vagotonia may occur intraoperatively from increases in ocular pressure, visceral traction, and other causes, and it is treated by interrupting the stimulus temporarily while administering anticholinergic drugs.

Drugs That Reduce the Acidity and Volume of Gastric Contents
The induction of general anesthesia eliminates the patient's ability to protect the airway should regurgitation of stomach contents occur. This is why nothing-by-mouth ("npo") status is emphasized so strongly for patients having elective procedures.
Decreasing the volume of gastric contents further reduces the likelihood of regurgitation, and increasing the gastric pH above 2.5 reduces damage to the lungs in the event of aspiration. Histamine H2-receptor antagonists, antacids, and prokinetic agents (see Chapters 37 and 38) frequently are administered to achieve these conditions.

Sedative-Hypnotics and Antianxiety Agents
Drugs such as benzodiazepines and butyrophenones (see Chapter 17: Hypnotics and Sedatives) are useful when administered before surgery, both for patient comfort and for facilitation of the anesthetic state. When given in conjunction with opioids, there is reduction of catecholamine release in response to surgical stimuli (Newman and Reves, 1993).

Opioids
Opioids (see Chapters 14: General Anesthetics and 23: Opioid Analgesics) may be used preoperatively in small doses to act synergistically with sedatives in creating a tranquil patient. Only in persons actually having pain or experiencing incipient withdrawal symptoms are they specifically indicated before surgery.

*Intraoperative Period
Monitoring
Standard monitoring (Pierce, 1989) during anesthesia includes continuous electrocardiography, monitoring of heart rate and body temperature, pulse oximetry, capnography (the measurement of carbon dioxide concentration in exhaled gas), and frequent noninvasive blood pressure measurement. Additional parameters measured may include urine output, blood loss, and ventilation-related parameters, including inspired oxygen, tidal volume, minute ventilation, peak inspired airway pressure, and all gas flows. Direct measurement of inspired and expired levels of volatile anesthetic agents is desirable. In selected cases, invasive measurements are made of arterial pressure, central venous pressure, pulmonary artery pressure, cardiac output, pulmonary capillary wedge pressure, right ventricular ejection fraction, and pulmonary artery oxygen saturation. Transesophageal echocardiography has proven to be most useful in cardiac surgery and in other special situations.

General Anesthesia
There are two fundamentally different ways of achieving the basic anesthetic conditions required to perform surgical procedures: general anesthesia and regional (or conduction) anesthesia. The hallmark of general anesthesia is loss of consciousness, as represented by the historical vignette and by the description "going to sleep," which continues to be used by lay persons and professionals alike. Regional anesthesia is effected by the injection or infiltration of certain amides or esters that block signal conduction (usually via voltage-gated sodium channels) near nerves, either peripherally or more centrally (see Chapter 15: Local Anesthetics). This widely used technique has advantages (including intense attenuation of noxious stimuli) as well as drawbacks, as discussed in a later section of this chapter. General anesthesia is classically described by four qualities: hypnosis (usually meaning sleep or loss of consciousness), amnesia, analgesia, and muscle relaxation.
To these must be added the broader concepts of maintaining physiological stability, attenuation of the surgical stress response, and a host of techniques to lessen the aforementioned categories of risk. The intraoperative period for general anesthesia is normally broken down into three phases, induction, maintenance, and emergence, each with its special considerations.

Induction
The "induction" of general anesthesia occurs when a conscious or otherwise responsive being is rendered unconscious by the effects on the nervous system of inhaled or intravenously injected agents.

Loss of Consciousness
Oddly, after 150 years of investigation, researchers still do not know with certainty either the molecular mechanisms whereby general anesthetic agents exert their neurological effect or the brain structures or circuitry involved in the loss of consciousness. It may be disconcerting, but a corollary of this lack of understanding is that assessment of the "depth" of anesthesia must be determined by indirect means (i.e., changes in vital signs) that are only variably reliable. The variety of structurally diverse molecules that can create the condition we call general anesthesia is astonishing (see Chapter 14: General Anesthetics). The group includes volatile organic agents (halogenated hydrocarbons, diethyl ether, chloroform); inorganic gases such as nitrous oxide and xenon; alcohols; and an array of intravenous agents, including barbiturates, etomidate, propofol, and ketamine. Exactly how and precisely where anesthetic agents produce their remarkable effects have been under investigation for a century (Meyer, 1899, 1901; Overton, 1901). At the cellular level, fundamental discoveries in the last decade have greatly changed traditional concepts that attributed anesthetic action to nonspecific membrane solubility of anesthetic molecules, with resulting perturbed structural and dynamic properties of the lipid membrane. Recent work has identified functional targets for a wide range of intravenous and inhalational anesthetic molecules. These primarily include ligand-gated ion channels, such as the γ-aminobutyric acid type A (GABAA), glycine, 5-HT3 serotonin, and nicotinic acetylcholine (ACh) receptors and subtypes of glutamate receptors (NMDA, AMPA, and kainate) (see Chapters 12: Neurotransmission and the Central Nervous System and 14: General Anesthetics). GABAA and glutamate receptors are found throughout the brain, while ACh and serotonin receptors are associated with specific pathways of interconnecting nuclei. Identifying the location of these receptors in the central nervous system (CNS), the function of pathways that incorporate them, and the behavioral and physiological changes induced by their interaction with anesthetic molecules are some of the fundamental challenges to modern research. Any discussion of loss of consciousness immediately raises the question: What is consciousness? Descriptions include the qualities of perception, attention, volition, self-awareness, and memory.
Purposeful movement and response to auditory, tactile, or noxious stimulation classically have suggested consciousness, but the role of spinal cord reflexes complicates this idea. Consciousness has been dubbed a "prescientific" concept (Kulli and Koch, 1991), but it is enjoying a resurgence of attention from a range of investigators, from philosophers to molecular biologists (Crick and Koch, 1998; Chalmers, 1996).

Neural Correlates of Consciousness
Two components of conscious awareness have been proposed: "arousal-access-vigilance" and "mental experience-selective attention" (Block, 1996). Crick and Koch (1995) postulate that some identifiable, active neuronal processes in the brain are associated with states of awareness. The quest is to determine what is special, if anything, about their connections and manner of activation. Such circuitry would be called the "neural correlates of consciousness" (Crick and Koch, 1998), and work is in progress to establish the concept's validity using in vivo imaging, single neuronal types, and intracellular components. Several observations suggest that neural pathways mediated by ACh control both the content of conscious awareness and its level of intensity (Perry et al., 1999). Mental disturbances seen with degenerative brain diseases include fluctuating levels of conscious awareness and are associated with deficits in neocortical ACh systems (Perry and Perry, 1995). The cholinergic system is distributed in various nuclei (Figure 13-1), including two major groups, the basal forebrain and pedunculopontine nuclei, with extensive bidirectional connections to the cortex and thalamus. These are considered to be essential for controlling selective attention (Bentivoglio and Steriade, 1990). The extent of cholinergic projections from the nucleus basalis to the human cortex suggests a major role in regulatory modulation. Continuous firing during rapid-eye-movement (REM) sleep is sufficient to activate the cortex (Perry et al., 1999).
The phenomenon of brainstem activation of cortical processes (accepted as necessary for consciousness as defined in higher animals) has received attention, because the midbrain reticular formation (MRF) has neural projections from brainstem to thalamic nuclei to cortical structures, and the main neurotransmitters (ACh and glutamate) have been described (Steriade, 1996). Note that either ACh receptors or glutamate receptors or both are inhibited by a wide range of anesthetic agents, including the volatile agents, barbiturates, and ketamine (Krasowski and Harrison, 1999).

Figure 13-1. Cholinergic Systems in the Human Brain. Two major pathways project widely to different brain areas: basal-forebrain cholinergic neurons (blue) [including the nucleus basalis (nb) and medial septal nucleus (ms)] and pedunculopontine-lateral dorsal tegmental neurons (gray). Other cholinergic neurons include striatal interneurons, cranial nerve nuclei, vestibular nuclei, and spinal cord preganglionic and motoneurons. (Modified from Perry et al., 1999, with permission.)

In human beings engaging in tasks that require alertness and attention, there is increased blood flow in the MRF (Kinomura et al., 1996). Stimulation of the MRF in anesthetized animals causes changes in the cortical EEG to resemble the awake state. Finally, Shimoji et al. (1984) have shown that the excitatory responses of MRF neurons, evoked by somatosensory stimulation in cats, are suppressed by anesthetic agents from several classes, while inhibitory responses of the MRF neurons are potentiated by barbiturates and ether. It has been proposed that awareness uses a serial attention mechanism consisting of high-frequency (40-Hz), synchronized oscillations that transiently "bind together" widely distributed cortical neurons related to different aspects of a perceived object (color, size, motion, sound, etc.) (Crick and Koch, 1990; Steriade et al., 1996). Direct application of ACh induces just such fast synchronized activity in hippocampal slice preparation (Fisahn et al., 1998). Again, ACh receptors are inhibited by halogenated agents, barbiturates, and ketamine (Perry et al., 1999). It has been suggested that the thalamus is the most likely source of this oscillatory activity because of its extensive bidirectional connections with higher and lower structures. It seems likely that the action of ACh in the cortex and thalamus is central to the normal maintenance of conscious awareness, as are interactions among ACh, GABA, and glutamate, all three of which control the cholinergic neurons in the basal forebrain and pedunculopontine projections.

Other pathway candidates for neural correlates of consciousness include noradrenergic projections from the pontine locus ceruleus nuclei, which distribute axons cephalad to the dorsal thalamus, hypothalamus, cerebellum, forebrain, and neocortex. Not only does this system contain 50% of all noradrenergic cells in the brainstem, but changes in concentrations of norepinephrine alter anesthetic dose requirements (Angel, 1993). α2-Adrenergic receptor agonists increase the depth of anesthesia, whereas α2-receptor antagonists increase the amount of anesthesia required (Angel et al., 1986). Obviously, these findings support a role for noradrenergic mechanisms contributing to consciousness.

Hemodynamic Effects
The physiological effects of anesthesia induction associated with the majority of both intravenous and inhalational agents include, most prominently, a decrease in systemic arterial blood pressure. The causes include direct vasodilation or myocardial depression or both, a blunting of baroreceptor control, and a generalized decrease in central sympathetic tone (Sellgren et al., 1990). Agents vary in the magnitude of their specific effects (see Chapter 14: General Anesthetics), but in all cases the hypotensive response is enhanced in the face of underlying volume depletion, intrinsic depressed myocardial function, and cardiovascular medications. Even anesthetics that show minimal hypotensive tendencies under normal conditions (etomidate, ketamine) must be used with caution in trauma victims, in whom intravascular volume depletion is being compensated by intense sympathetic discharge. Smaller than normal induction dosages are employed in patients presumed to be sensitive to the hemodynamic effects of anesthetics (e.g., elderly or debilitated patients, those with systolic or diastolic dysfunction, and those taking diuretics or who have had recent dye studies or preparation for bowel surgery).
Administration of direct- and indirect-acting sympathomimetics (see Chapter 10: Catecholamines, Sympathomimetic Drugs, and Adrenergic Receptor Antagonists) will contribute to stability. Also, it is common to administer intravenous fluids liberally prior to and during induction to avoid hypotension. In some cases, fluid is relatively contraindicated, requiring the use of inotropic agents and/or vasoconstrictors to support the circulation.

Airway Maintenance
Airway maintenance is essential following induction. Ventilation must be assisted or controlled for at least some period and perhaps throughout surgery. The gag reflex is lost, and the stimulus to cough is blunted. Lower esophageal sphincter tone is reduced, and both passive and active regurgitation may occur. Endotracheal intubation was introduced in the early 1900s (Kuhn, 1901) and was a major reason for the decline in the number of aspiration deaths. Muscle relaxation is valuable during the induction of general anesthesia, where it facilitates management of the airway, including endotracheal intubation. Neuromuscular blocking agents commonly are used to effect such relaxation (see Chapter 9: Agents Acting at the Neuromuscular Junction and Autonomic Ganglia). Endotracheal intubation both prevents aspiration and permits control of ventilation. While the procedure is used broadly, there also are alternatives. In patients who have had nothing to eat and are without symptoms of reflux, maintenance of ventilation (usually assisted-spontaneous) with an externally applied mask has been common for certain procedures not requiring muscle relaxation. It is important to note that the combination of direct laryngoscopy and intubation is a stimulus fully comparable to an abdominal incision. Instrumentation of the subglottic airway stimulates secretions and exacerbates bronchospastic reactions as well, so, when feasible, it may be desirable to avoid the procedure.
An instrument called the laryngeal mask airway (Brain, 1983) has been progressively employed. This device consists of a flexible oval fenestrated diaphragm that is inserted blindly into the oropharynx. When seated, the diaphragm covers the laryngeal opening and can be sealed by inflation of a balloon around its circumference. Use of the laryngeal mask is becoming very popular; more than half of the anesthetics administered in Great Britain are thought to involve its use. There is controversy about its employment during controlled ventilation and in patients with symptoms of gastric reflux, since it does not provide complete airway protection.

Stabilizing the Anesthetic State
Following induction, continued management of the patient may be associated with fluctuations in blood pressure under the competing influences of anesthetic-induced depression and surgical stimulation. Part of the art and science of administering anesthetics is learning to manage the process smoothly, matching metabolic demands with appropriate oxygen supply while ensuring unconsciousness in preparation for the impending surgical stimulation, which will continue (not necessarily uniformly) throughout the case. Assessing the patient's level of consciousness, assuring adequate depth of anesthesia, and minimizing recall obviously are central to the goals of general anesthesia.

Signs and Stages of Anesthesia
Between 1847 and 1858, John Snow described certain signs that helped him determine the depth of anesthesia in patients receiving chloroform or ether. In 1920, Guedel, using these and other signs, outlined four stages of general anesthesia, dividing the third stage, surgical anesthesia, into four planes. The somewhat arbitrary division is as follows: I, stage of analgesia; II, stage of delirium; III, stage of surgical anesthesia; IV, stage of medullary depression. Although the classical signs and stages of anesthesia are partly recognizable during administration of volatile anesthetics, they are most often obscured by modern anesthetic techniques.
Intravenous induction agents (thiopental, etomidate, propofol) produce a deep plane of anesthesia virtually within one circulation time, while certain properties of new inhalational agents, such as low blood solubility (desflurane) and minimal airway irritability (sevoflurane), allow for such rapid establishment of anesthesia that the transition to unconsciousness is almost immediate. Furthermore, Cullen and coworkers (1972) demonstrated that no single one of the major signs described by Guedel correlated satisfactorily with the measured alveolar concentrations of anesthetic during prolonged, stable states. Thus, only the term stage two remains in common use today, signifying a state of delirium in the partially anesthetized patient, most frequently seen during emergence from anesthesia where volatile inhalational agents have been used.

Maintenance
The maintenance phase of general anesthesia is associated with changes in intensity of stimulation, fluid shifts (third spacing), blood loss, acid-base disturbances, hypothermia, coagulopathies, and other conditions. Of course, in many cases none of these things occurs, but special measurements, monitors, and precautions are necessary when they do, or in anticipation of them. Management of the anesthetic interplays constantly with the general physiology of the patient. Historically, and continuing to the present, the great majority of cases involve the administration of one or more of the anesthetic gases during the maintenance phase. Special factors govern the transport of anesthetic molecules from inspired gas through the lungs to blood and then to the brain, including (1) the concentration of the anesthetic agent in inspired gas, (2) the pulmonary ventilation delivering the anesthetic to the lungs, (3) the transfer of the gas from the alveoli to the blood flowing through the lungs, and (4) the loss of the agent from the arterial blood to all the tissues of the body. Obviously, the concentration in neural tissues is of greatest importance.
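The interplay of factors (1) through (4) can be illustrated with a deliberately simplified wash-in sketch. All parameters below (alveolar ventilation, cardiac output, lung volume, blood:gas partition coefficients) are assumed round numbers for illustration only, and the model ignores venous recirculation and tissue compartments; real uptake and distribution are multicompartmental and are treated in Chapter 14.

```python
# Toy single-compartment model of anesthetic wash-in (illustration only).
# The ratio of alveolar to inspired concentration (FA/FI) rises faster
# for agents with low blood solubility, which is why low-solubility
# agents permit rapid induction and emergence.

def simulate_wash_in(blood_gas_lambda, minutes=10.0, dt=0.01):
    """Return approximate FA/FI after `minutes` of constant inspired gas."""
    VA = 4.0      # alveolar ventilation, L/min (assumed)
    Q = 5.0       # cardiac output, L/min (assumed)
    V_lung = 2.5  # functional lung gas volume, L (assumed)
    FI, FA = 1.0, 0.0
    for _ in range(int(minutes / dt)):
        delivery = VA * (FI - FA)           # fresh gas delivered to alveoli
        uptake = blood_gas_lambda * Q * FA  # carried away by pulmonary blood
        FA += dt * (delivery - uptake) / V_lung
    return FA / FI

# A low-solubility agent (lambda ~0.45, e.g., desflurane) approaches its
# plateau faster and higher than a soluble one (lambda ~2.4, e.g., halothane).
fast = simulate_wash_in(0.45)
slow = simulate_wash_in(2.4)
assert fast > slow
```

The design point is simply that a large blood:gas partition coefficient acts as a sink that delays the rise of alveolar (and hence brain) partial pressure.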
The details of the uptake and distribution of anesthetic agents are covered in Chapter 14: General Anesthetics. With full cognizance of the factors related to the delivery of anesthetic gases to the brain, there remains the need to characterize and quantify the relative potencies of volatile anesthetic agents in a practical way.

The Minimum Alveolar Concentration (MAC)
Since 1965, the relative potencies of volatile anesthetic agents (halothane, enflurane, isoflurane, etc.), N2O, and xenon have been described by the concentration (minimum alveolar concentration, or MAC) that renders immobile 50% of subjects exposed to a strong noxious stimulation (1.0 MAC) (Eger et al., 1965), such as surgical incision. Lack of movement in response to incision has been assumed to imply unconsciousness and amnesia in an unparalyzed patient. A major strength of this concept stems from the facts that the concentration of anesthetic gases can be measured and displayed in each breath and that the end-tidal expired partial pressures approximate the brain concentration. The latter assumption fails during periods of rapid change. Also useful is the fact that doses of different agents expressed as MAC equivalents appear to be additive (Cullen et al., 1969; Miller et al., 1969). In human beings exposed to modern inhalational agents, mild analgesia begins at about 0.3 MAC; amnesia is present at 0.5 MAC, where the patient can respond to command or even speak but does not recall this later (Levy, 1986). Obtundation deepens at 1.0 MAC, where (by definition) 50% of patients remain immobile after stimulation. At higher doses (about 1.3 MAC), the sympathetically mediated response to surgery is blunted (Roizen et al., 1981). Doses of inhalational agents higher than 2.0 MAC (equilibrated) are said to be potentially lethal, but in fact such MAC multiples, achieved using balanced combinations of inhalational and intravenous agents, are commonly sustained without untoward effect. Pharmacological support of the circulation may be required.
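Because MAC equivalents of different agents appear to be additive, a mixed inhalational technique can be expressed as a single MAC multiple by simple bookkeeping. The sketch below assumes approximate published adult MAC values (in end-tidal volume percent) purely for illustration; it is not a clinical dosing tool.

```python
# Sketch of MAC-equivalent arithmetic, relying on the reported additivity
# of MAC fractions across agents (Cullen et al., 1969; Miller et al., 1969).
# The MAC values below are assumed, approximate adult figures.

MAC_VOL_PERCENT = {
    "halothane": 0.75,
    "isoflurane": 1.15,
    "nitrous_oxide": 104.0,  # >100% at 1 atm, so N2O alone cannot reach 1 MAC
}

def total_mac(end_tidal):
    """Sum each agent's end-tidal concentration as a fraction of its own MAC."""
    return sum(pct / MAC_VOL_PERCENT[agent] for agent, pct in end_tidal.items())

# Example: 0.6% isoflurane combined with 60% nitrous oxide contributes
# roughly 0.52 MAC + 0.58 MAC, i.e., about 1.1 MAC in total.
combined = total_mac({"isoflurane": 0.6, "nitrous_oxide": 60.0})
```

This is the same arithmetic a clinician performs implicitly when supplementing a fraction of MAC of a volatile agent with nitrous oxide.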
Obviously, a similar concept exists for intravenous anesthetics (barbiturates, propofol, etomidate) and adjuvants, but there is currently no on-line, real-time measurement of blood drug concentration for these compounds. The clinician must rely on body weight- and age-adjusted dose guidelines to approximate the target blood levels and then adjust the delivery rates according to various physiological responses, notably changes in blood pressure and heart rate. Knowledge of the MAC fraction or multiple does not necessarily convey all of the necessary information. The anesthesiologist needs to assess both the level of responsiveness that exists at a point in time (with the level of stimulation existing at that moment) and the likelihood that the patient will react to an anticipated increase in the stimulation (such as laryngoscopy, incision, or use of a retractor). As surgery proceeds, continuous adjustments in the delivery rates of both inhalational and intravenous agents are required in an attempt to ensure unconsciousness, amnesia, immobility, and analgesia while simultaneously attending to the drifting physiological conditions.

Limitation of MAC
It is important to note that the concept of MAC leaves 50% of patients who actually move with stimulation and who thereby fail one measure of lack of awareness. While the dose-response curve is steep, with 99% of subjects immobile at 1.3 MAC, the possibility of awareness and recall still may exist. Moreover, movement itself is of no use in the large number of patients who receive muscle relaxants. Other indicators of awareness that are independent of muscle relaxation include lacrimation, diaphoresis, and pupillary dilation. These signs are highly suggestive if they are present, but their absence is not definitive. While the absence of movement does not ensure unconsciousness, neither does its presence necessarily imply consciousness. Elegant experiments with laboratory animals using EEG and MRF recordings, in which the circulation was split between the brain and torso, showed that 1.0 MAC of isoflurane delivered to both circulations largely suppressed both EEG and MRF responses to noxious stimuli delivered to the torso. However, there were marked effects when the torso concentration was lowered to 0.3 MAC while the brain remained at 1.0 MAC (Antognini et al., 2000): the animals moved with torso stimulation although the brain remained unconscious by EEG criteria. It has been appreciated for some time that spinal cord effects are important in general anesthesia (Kendig, 1993). Indeed, subarachnoid injection of local anesthetics lowers the dose of sedative required to achieve a hypnotic response (Ben-David et al., 1995). These observations explain the long-noted clinical experience that a patient who moves with incision is not necessarily "awake" and one who does not move is not necessarily unconscious or amnesic. The experimental validation of this phenomenon clearly calls into question the applicability of MAC as classically defined and sets the stage for new efforts to better assess brain states during anesthesia.

Amnesia
Memory processes associated with anesthesia and surgery are complex (Bailey and Jones, 1997). Both explicit (free recall) memory and implicit (subconscious) phenomena are described. The latter may be identified by tests such as category generation, free association, and forced-choice recognition. Recent large studies have suggested that the incidence of explicit recall following general anesthesia is 0.15% (0.18% when muscle relaxants are used, 0.10% without muscle relaxants) in patients undergoing surgery and anesthesia when interviewed three times following the case (Sandin et al., 2000).
Interestingly, the incidence of recall was unaffected by the use of preoperative benzodiazepines. Since many millions of anesthetic procedures are performed each year, thousands of people will actually experience intraoperative awareness. Further, it is well established (Schwender et al., 1998) that delayed neurotic symptoms (posttraumatic stress disorder) can follow awareness during general anesthesia.

Monitoring Consciousness

The search for a monitor of the level of consciousness, or of anesthetic depth, understandably has centered on electroencephalography (and evoked potentials). The median frequency of the processed EEG power spectrum falls from about 10 Hz in the awake state to 5 Hz or less in both natural sleep and anesthesia in the absence of verbal stimuli, but it does so only for some agents, and it can be altered further by hypoxia, hypocarbia, hypothermia, and other conditions common during surgery (Jessop and Jones, 1992). The volatile agents, as previously noted, elicit an excitement phase, registered on the EEG as a transient period of high frequency and high power, as the subject passes through the planes of anesthetic depth. The EEG therefore has been deemed unreliable as a measure of anesthetic dose or as a predictor of awareness or recall (Levy, 1986). Recent developments have changed that impression. Highly processed EEG signals have evolved from first-order statistics (signal amplitude mean and variance) to second-order statistics (the power spectrum) and now to higher-order statistics. The latter include the bispectrum and trispectrum (third- and fourth-order statistics, respectively) (Rampil, 1998). Special attention is focused on the bispectrum, which measures the correlation between phase and frequency components. Four derived subparameters have been defined and combined through empirically determined weighting factors to produce a dimensionless number, called the bispectral index or BIS (Aspect Medical Systems, Inc., Natick, MA), which varies from 0 to 100. The proprietary algorithms that give rise to the BIS value have evolved by incorporating data from thousands of patients undergoing anesthesia with agents from different classes. A monitor receives signals from electrodes placed on the forehead and displays the BIS value continuously. Table 13–1 shows the experimentally derived correlation between absolute value and effect. Figure 13–2 shows the hypnotic level (BIS) versus time in a volunteer receiving a propofol infusion, demonstrating a direct relationship between the blood level of the hypnotic drug and the level of consciousness. This technology has become available at a time of growing international concern regarding intraoperative awareness (Ghoneim, 2000). It must be emphasized that the ability of the device to assure lack of consciousness or recall has not been established, but recall below a reading of 60 apparently has not been reported.

Figure 13–2. Hypnotic State and Sedative Concentration. The figure shows a continuous read-out of hypnotic state, assessed by bispectral index (BIS) monitoring, as the propofol blood level is varied. (From Rosow and Manberg, 1998, with permission.)
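The actual subparameters and weighting factors behind the BIS are proprietary, but the general scheme of collapsing several normalized EEG-derived subparameters into one dimensionless 0-to-100 index through empirical weights can be sketched as follows (all subparameter values and weights here are invented for illustration, not the real algorithm):

```python
# Illustrative only: the real BIS subparameters and weights are proprietary.
# This sketch shows the general idea of combining several normalized EEG
# subparameters (each scaled so 1.0 = fully awake) through empirical
# weighting factors into a single dimensionless 0-100 index.

def composite_index(subparams, weights):
    """Weighted combination of normalized subparameters, clipped to 0-100."""
    assert len(subparams) == len(weights)
    raw = sum(w * s for w, s in zip(weights, subparams)) / sum(weights)
    return max(0.0, min(100.0, 100.0 * raw))

# Hypothetical subparameters (e.g., derived from the power spectrum and
# bispectrum of the frontal EEG) and made-up weighting factors.
awake = [0.95, 0.90, 0.92, 0.97]
deep = [0.20, 0.15, 0.25, 0.10]
weights = [0.4, 0.3, 0.2, 0.1]

print(composite_index(awake, weights))   # high value: awake
print(composite_index(deep, weights))    # low value: deep hypnosis
```

The weighting and clipping mirror how any such index must behave at its extremes; the clinical meaning of intermediate values (e.g., the empirical observation that recall has not been reported below 60) comes entirely from patient data, not from the arithmetic.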

Analgesia

Although the inhalational anesthetic agents have an analgesic component, in that the response to noxious impulses is blunted, this effect is mild at low doses and becomes adequate for surgery only at higher concentrations (1.3 MAC or greater), where other side effects may be limiting. Most general anesthetic regimens therefore include some dose of an opioid to provide analgesia.

Opioids

Opioids have been used for centuries for their analgesic properties. The identification of opioid receptors in the spinal cord (Kitahata et al., 1974) and brainstem, together with the manufacture of synthetic opioids of great potency (fentanyl, alfentanil, sufentanil), has transformed the practice of anesthesia over the past 30 years. The pharmacology of opioids is covered in Chapters 14: General Anesthetics and 23: Opioid Analgesics.

Opioids are synergistic with sedative-hypnotic agents and with inhalational agents, including nitrous oxide. The ability of opioids to blunt responses to painful stimuli, together with their intrinsic hemodynamic stability, has led to so-called "high-dose" techniques, notably for cardiac surgery (a maximal stimulus). In this approach, opioids are combined only with an amnesic agent such as midazolam (Curran, 1986), since opioid doses that provide autonomic stability (and lack of movement) will not reliably cause loss of consciousness or amnesia (Ausems et al., 1983). More common is the use of lower doses of opioids, administered continuously or intermittently during surgery, in conjunction with a volatile general anesthetic given in fractions of MAC (0.5 to 0.8) and, where not contraindicated, nitrous oxide. This combination, called "balanced anesthesia" by some, permits unconsciousness to be sustained (presumably) by the volatile anesthetic while the opioids supply analgesia for the duration of surgery and into the postoperative period. Return to responsiveness at surgery's end can be prompt. The introduction of remifentanil, an ester opioid metabolized by plasma esterases, has created a new dimension for intensity of analgesia during surgery and for rapid awakening during recovery. This agent is not only intrinsically very potent, but it also has a very short half-life, allowing higher infusion rates during intense periods of stimulation (Bürkle et al., 1996).

Contribution of Analgesia to the Hypnotic State

To separate the influences of analgesia and hypnosis on anesthetic requirements under conditions of variable stimulation, consider a study of patients who received an infusion of the hypnotic propofol in a dose predicted to induce marked loss of consciousness (blood level 4 μg/ml) (Guignard et al., 2000). 
Following equilibration and determination of the hypnotic state (by BIS monitoring), the opioid remifentanil was administered in five graded doses to achieve blood levels of 0 (placebo), 2, 4, 8, and 16 ng/ml. At steady state, an intense noxious stimulus (laryngoscopy) was applied. Figure 13–3 shows the aggregate level of consciousness at each step. Note that (1) propofol alone produced a level of deep hypnosis (BIS = 50); (2) the addition of the opioid did not deepen the hypnotic state; and (3) the effect of the stimulus on arousal varied with the opioid blood level: the lowest level (0) led to a marked rise in the BIS value, while the highest opioid concentration completely removed the tendency to arousal, leaving the hypnotic state unchanged.

Figure 13–3. Level of Consciousness [Assessed by Bispectral Index (BIS) Monitoring] Following Sedation, Analgesia, and Stimulation. The numerals on the right side of the figure represent the blood levels of remifentanil (ng/ml) after four different doses or placebo (0). The stimulus was laryngoscopy. See text for details. (Adapted from Guignard et al., 2000, with permission.)

The methodology in this study illustrates an alternative technique for managing general anesthesia with intravenous agents only. Called TIVA, for total intravenous anesthesia, it frequently incorporates an amnesic drug such as midazolam to ensure lack of recall (Newman and Reves, 1993) and a muscle relaxant, in addition to analgesic and hypnotic agents. The ability to assess levels of consciousness objectively enhances the appeal of TIVA. The technique may increase in popularity, especially if the cost of the newer, shorter-acting agents can be justified.

Muscle Relaxation

The fourth quality of general anesthesia is muscle relaxation. At a minimum, this implies that patients should not move with incision, an achievable goal with sufficient doses of inhalational anesthetics, intravenous anesthetics, opioids, or some combination. However, beyond the brief muscle paralysis needed for tracheal intubation, more prolonged relaxation is required for some orthopedic, general abdominal, and otolaryngologic surgeries. Muscle relaxants are discussed further in Chapters 9: Agents Acting at the Neuromuscular Junction and Autonomic Ganglia and 14: General Anesthetics. Their usually ready reversal by cholinesterase inhibitors (e.g., neostigmine; see Chapter 8: Anticholinesterase Agents), along with the development of drugs with a wide range of half-lives, has made the use of muscle relaxants widespread in surgery.

Emergence

As surgical stimulation begins to lessen during wound closure, the delivered doses of anesthetic agents are reduced in a manner that reflects their specific pharmacokinetics. Both inhaled and intravenous drugs can exhibit delayed dissipation, caused either by slow washout (from poorly perfused, fat-rich tissue) or by the character of their distribution and metabolism. The major factors that affect the rate of elimination of inhaled anesthetics are the same as those important in the uptake phase: pulmonary ventilation, blood flow, and solubility in blood and tissue. 
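The interplay of ventilation and blood solubility in washout can be illustrated with a deliberately crude one-compartment model: treat the lung gas plus equilibrated blood as a single well-stirred store whose effective capacity grows with the blood:gas partition coefficient, so that at the same ventilation a soluble agent washes out more slowly. The compartment sizes below are rough physiological figures, and the model ignores tissue stores and metabolism entirely:

```python
import math

def washout_fraction(t_min, va_l_per_min, frc_l, blood_gas_lambda, blood_vol_l):
    """Fraction of alveolar anesthetic tension remaining after t_min minutes.

    Crude one-compartment sketch: lung gas (FRC) and blood are treated as a
    single well-stirred store whose effective capacity for the agent is
    FRC + lambda * blood volume; alveolar ventilation clears that store
    exponentially. Tissue compartments and metabolism are ignored.
    """
    effective_volume_l = frc_l + blood_gas_lambda * blood_vol_l
    tau_min = effective_volume_l / va_l_per_min  # washout time constant
    return math.exp(-t_min / tau_min)

# Rough figures: FRC ~2.5 L, alveolar ventilation ~4 L/min, blood volume ~5 L.
# Approximate blood:gas partition coefficients: N2O ~0.47, halothane ~2.4.
for name, lam in [("nitrous oxide", 0.47), ("halothane", 2.4)]:
    left = washout_fraction(5.0, 4.0, 2.5, lam, 5.0)
    print(f"{name}: {left:.2f} of alveolar tension remains after 5 min")
```

The qualitative result, not the numbers, is the point: the low-solubility agent is nearly gone in minutes while the soluble one lingers, which is why awakening is rapid after relatively insoluble agents such as nitrous oxide.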
Because of the high blood flow to the brain, the tension of anesthetic gas in the brain decreases rapidly, accounting for the rapid awakening from anesthesia noted with relatively insoluble agents such as nitrous oxide (see Chapter 14: General Anesthetics). The physiological changes accompanying emergence from general anesthesia can be profound. Hypertension and tachycardia are common as the sympathetic nervous system regains its tone and is enhanced by pain (Breslow, 1998). Myocardial ischemia can appear or markedly worsen during emergence in patients with coronary artery disease. Emergence excitement occurs in 5% to 30% of patients and is characterized by tachycardia, restlessness, crying, moaning, thrashing, and various neurological signs (Eckenhoff et al., 1961). Postanesthesia shivering occurs frequently because of core hypothermia, which was common before its adverse effects were appreciated. A small dose of meperidine (12.5 mg) lowers the shivering trigger temperature and effectively stops the activity. The incidence of all of these emergence phenomena is greatly reduced when opioids are employed as part of the intraoperative regimen.

Hypothermia

Patients develop hypothermia (body temperature <36°C) during surgery for several reasons, including low ambient temperature (and exposed body cavities), cold intravenous fluids, altered thermoregulatory control, and reduced metabolic rate. General anesthetics lower the core temperature set point at which thermoregulatory vasoconstriction is activated to defend against heat loss. Further, the vasodilation caused by both general and regional anesthesia blocks normal thermoregulatory vasoconstriction, redistributing heat within the body mass and leading to a rapid decline in core temperature until the new (lower) set point is reached (Sessler, 2000). Total body oxygen consumption decreases by about 30% with general anesthesia, and thus heat generation is reduced. Even small drops in body temperature may increase perioperative morbidity, including cardiac complications (Frank et al., 1997), wound infections (Kurz et al., 1996), and blood loss. Prevention of hypothermia therefore has emerged as a major goal of anesthetic care. Modalities to maintain normothermia include warmed intravenous fluids, heat exchangers in the anesthesia circuit, forced-warm-air covers, and newer technology involving water-filled garments with microprocessor feedback control to a core-temperature set point. 
Regional (Local) Anesthesia

Local anesthetics include esters (e.g., cocaine, procaine, tetracaine) and amides (e.g., lidocaine, bupivacaine, ropivacaine) that are injected in the vicinity of nerves to cause temporary, virtually complete interruption of neural traffic (see Chapter 15: Local Anesthetics), enabling surgery to proceed in comfort. Upper-limb procedures may be facilitated by plexus blockade, while surgery on the thorax, abdomen, and lower extremities may be accomplished by neuraxial blockade (epidural and spinal), either in conjunction with general anesthesia or as the sole modality employed. Spinal anesthesia, first performed in 1898 (see Wulf, 1998), is effected by injection of local anesthetic agents into the lumbar (L3–L4, L4–L5) subarachnoid cerebrospinal fluid. Since the spinal cord rarely extends below L2, the injection needle harmlessly pushes aside the strands of the cauda equina. Depending on the volume of injectate (usually 1 to 3 ml), its specific gravity (hyper-, hypo-, or isobaric, varying with the diluent), the manner of injection, and the position of the patient, spinal blockade may extend from T2 down through the sacral roots. Epidural anesthesia proceeds with injection of a larger volume (10 to 25 ml) of local anesthetic solution into the epidural "potential" space. Because dural puncture is not intended, the site of entry may be at any vertebral level, permitting a band of "segmental" blockade approximately limited to the region of interest. Spinal and epidural anesthesia share many similarities and will be discussed together as "neuraxial" techniques, with specific differences highlighted. Sympathetic outflow is mediated through spinal segments T1–L2, and neuraxial blockade most commonly will involve several of these. The drop in blood pressure that ensues is anticipated and compensated for, as necessary, with fluid administration and vasopressor agents. In volume-depleted patients, aggressive prophylactic measures are taken. Arterial and venous dilation cause most of the pressure drop, but the cardiac sympathetic nerves emerge from T1–T4 and also may be blocked. This blockade of sympathetic nerves to the heart is used to advantage in the treatment of myocardial ischemia refractory to conventional medical therapy, by administering a thoracic epidural injection of a local anesthetic agent (Blomberg et al., 1989). Further, the decrease in afterload seen with all levels of neuraxial blockade can improve cardiac output in patients with congestive heart failure. To achieve neuraxial blockade over a prolonged period (more than 3 hours), a catheter is placed in either the subarachnoid or the epidural space for bolus injection or continuous infusion. This allows neural blockade to be continued into the postoperative period. As previously noted, regional anesthesia may have special advantages over general anesthesia, including attenuation of the surgical stress response. Intense blockade using regional anesthesia can hold intraoperative catecholamines to presurgical values (Breslow et al., 1993). There is evidence that successful blockade of components of the stress response can result in improved outcome. Hypercoagulability, seen postoperatively in patients having lower-extremity vascular surgery under general anesthesia, is eliminated when stress ablation is achieved. Unfortunately, stimuli from upper abdominal and thoracic surgery are difficult to block completely with regional techniques. In these cases, direct suppression of the sympathetic nervous system by α2-adrenergic receptor agonists, or end-organ blockade (β-adrenergic receptor antagonists), may be required. Other benefits include shorter periods of ileus (Liu et al., 1995). The discovery of opioid receptors in the dorsal horn of the spinal cord suggested the addition of a neuraxial opioid to induce analgesia. 
This practice is now common, and the combination of opioids and local anesthetic agents is used to advantage, especially in the management of postoperative pain. Unattenuated surgical stimuli cause sensitization of excitable spinal neurons, a phenomenon called neuroplasticity (King et al., 1988) or "wind-up." This condition produces long-lasting depolarization of posterior-horn neurons and heightens the perception of pain. Wind-up can be prevented by intense blockade prior to the stimulus (preemptive analgesia), as with epidural or spinal anesthesia, or by adequately performed local infiltration of the proposed incision site. Complications of regional anesthesia include "high" spinal blockade, hypotension, headache, cardiac and vascular toxicity, neuropathies, and epidural hematoma.

*Postoperative Period
The postoperative period can be a turbulent experience for patients and health-care providers. In addition to the emergence phenomena listed above, other problems arise involving the airway, lungs, and cardiovascular system. Airway obstruction may occur because residual anesthetic effects continue partially to obtund consciousness and reflexes (especially in patients who normally snore or who have sleep apnea). Strong inspiratory efforts against a closed glottis can lead to negative-pressure pulmonary edema. Pulmonary functional residual capacity is reduced postoperatively following all types of anesthesia and surgery, and hypoxemia may occur. Hypertension can be prodigious and must be treated with α1-adrenergic receptor antagonists, β-adrenergic receptor antagonists, α2-adrenergic receptor agonists, angiotensin-converting enzyme inhibitors, calcium channel antagonists, or other intravenous antihypertensive agents. Pain control can be complicated in the immediate postoperative period, especially if opioids have not been part of a balanced anesthetic. Administration of opioids in the recovery room can be problematic in patients who still have a substantial residual anesthetic effect: patients can alternate between screaming in apparent agony and being deeply somnolent with airway obstruction, all in a matter of moments. The nonsteroidal antiinflammatory agent ketorolac (30 to 60 mg intravenously) frequently is effective, and the development of cyclooxygenase-2 inhibitors (see Chapter 27: Analgesic-Antipyretic and Antiinflammatory Agents and Drugs Employed in the Treatment of Gout) holds promise for analgesia without respiratory depression. Regional anesthetic techniques are an important part of a perioperative "multimodal" approach that employs local anesthetic wound infiltration; epidural, spinal, and plexus blocks; nonsteroidal antiinflammatory drugs; opioids; α2-adrenergic receptor agonists; and NMDA-receptor antagonists (which prevent neuroplasticity) (Kehlet, 1997). Patient-controlled administration of intravenous and epidural analgesics makes use of small computerized pumps activated on demand but programmed with safety limits to prevent overdose. The agents used are opioids (frequently morphine) by the intravenous route and opioid, local anesthetic, or both by the epidural route. These techniques have revolutionized postoperative pain management, which can be continued for hours or days, promoting ambulation and improved bowel function while oral medications are stabilized. 
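The safety logic of such a patient-controlled analgesia (PCA) pump can be sketched as two checks applied before each demand dose: a lockout interval since the last delivered dose, and a running hourly maximum. The dose sizes and limits below are placeholders for illustration, not clinical recommendations:

```python
class PCAPump:
    """Minimal sketch of patient-controlled analgesia lockout logic.

    All doses and limits are illustrative placeholders, not clinical values.
    """
    def __init__(self, demand_dose_mg=1.0, lockout_min=8, hourly_limit_mg=6.0):
        self.demand_dose_mg = demand_dose_mg
        self.lockout_min = lockout_min
        self.hourly_limit_mg = hourly_limit_mg
        self.delivery_times = []  # minutes since the start of the infusion

    def request(self, t_min):
        """Patient presses the demand button at t_min; return True if dosed."""
        if (self.delivery_times
                and t_min - self.delivery_times[-1] < self.lockout_min):
            return False  # still inside the lockout interval
        given_last_hour = sum(self.demand_dose_mg
                              for d in self.delivery_times if t_min - d < 60)
        if given_last_hour + self.demand_dose_mg > self.hourly_limit_mg:
            return False  # hourly safety limit would be exceeded
        self.delivery_times.append(t_min)
        return True

pump = PCAPump()
print(pump.request(0))   # first demand is delivered
print(pump.request(5))   # denied: within the lockout interval
print(pump.request(10))  # delivered: lockout has elapsed
```

Because an oversedated patient stops pressing the button, the lockout plus hourly cap gives the negative-feedback safety margin the text alludes to: demands made too soon or too often are simply not delivered.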
Nausea and Vomiting

Nausea and vomiting in the postoperative period continue to be a significant problem following general anesthesia. They are caused by an action of anesthetics on the chemoreceptor trigger zone and the brainstem vomiting center, which are modulated by serotonin, histamine, muscarinic acetylcholine, and dopamine receptors. The serotonin 5-HT3-receptor antagonist ondansetron (see Chapter 38: Prokinetic Agents, Antiemetics, and Agents Used in Irritable Bowel Syndrome) is very effective in suppressing nausea and vomiting. Common treatments also include droperidol, metoclopramide, dexamethasone, and avoidance of N2O. The use of propofol as an induction agent and of the nonsteroidal antiinflammatory drug ketorolac as a substitute for opioids may decrease the incidence and severity of postoperative nausea and vomiting.

Chapter 14. General Anesthetics

*Overview


General anesthetics are a class of drugs used to depress the central nervous system to a degree sufficient to permit the performance of surgery and other noxious or unpleasant procedures. Not surprisingly, general anesthetics have very low therapeutic indices and thus are dangerous drugs that require great care in administration. Indeed, an entire specialty of medicine has grown up around the administration of this class of drugs. General anesthetics can be administered by a variety of routes, but intravenous or inhalational administration is preferred because effective doses can be more accurately administered and the time course of action more carefully controlled. While all general anesthetics produce a relatively similar anesthetic state, the drugs are quite dissimilar in their secondary actions (side effects) on other organ systems. The selection of specific drugs and routes of administration to produce general anesthesia is based on their pharmacokinetic properties and on the secondary effects of the various drugs, in the context of the individual patient's age, pathophysiology, and medication use. This chapter will review basic aspects of anesthetic action and then will focus on the specific properties of inhalational and intravenous anesthetics as well as on practical aspects of their use.

*Introduction
Definition of Anesthetic State

General anesthetics are a structurally diverse class of drugs that produce a common end point: a behavioral state referred to as general anesthesia. In the broadest sense, general anesthesia can be defined as a global but reversible depression of central nervous system (CNS) function resulting in the loss of response to and perception of all external stimuli. While this definition is appealing in its simplicity, it is not useful for two reasons. First, it is inadequate, because anesthesia is not simply a deafferented state; amnesia, for example, is an important aspect of the anesthetic state. Second, not all general anesthetics produce identical patterns of deafferentation. Barbiturates, for example, are very effective at producing amnesia and loss of consciousness but are not effective as analgesics. An alternative way of defining the anesthetic state is to consider it as a collection of "component" changes in behavior or perception. The components of the anesthetic state include amnesia, immobility in response to noxious stimulation, attenuation of autonomic responses to noxious stimulation, analgesia, and unconsciousness. It is important to remember that general anesthesia is useful only insofar as it facilitates the performance of surgery or other noxious procedures. The performance of surgery requires an immobilized patient who does not have an excessive autonomic response to surgery (blood pressure, heart rate) and who has amnesia for the procedure. Thus, the essential components of the anesthetic state are immobilization, amnesia, and attenuation of autonomic responses to noxious stimulation. Indeed, if an anesthetic produces profound amnesia, it can be difficult, in principle, to determine whether it also produces either analgesia or unconsciousness. 
Measurement of Anesthetic Potency

Given the essential requirement that a general anesthetic prevent movement in response to surgical stimulation, the potency of general anesthetic agents usually is measured by determining the concentration that prevents such movement. As described in Chapter 13: History and Principles of Anesthesiology, anesthetic potency is measured in MAC units, with 1 MAC defined as the minimum alveolar concentration that prevents movement in response to surgical stimulation in 50% of subjects. The strengths of MAC as a measurement are that (1) it can be monitored continuously by measuring end-tidal anesthetic concentration using infrared spectroscopy or mass spectrometry; (2) it provides a direct correlate of the free concentration of the anesthetic at its site(s) of action in the central nervous system; and (3) it is a simple-to-measure end point that reflects an important clinical goal. End points other than immobilization also can be used to measure anesthetic potency. For example, the ability to respond to verbal commands (MAC-awake) (Stoelting et al., 1970) and the ability to form memories (Dwyer et al., 1992) also have been correlated with alveolar anesthetic concentration. Interestingly, verbal response and memory formation are both suppressed at a fraction of MAC. Furthermore, the ratio of the anesthetic concentrations required to produce amnesia and immobility varies significantly among different inhalational anesthetic agents (nitrous oxide vs. isoflurane; Table 14–1), suggesting that anesthetic agents may produce these behavioral end points via different cellular and molecular mechanisms. The potency of intravenous anesthetic agents is somewhat more difficult to measure, because no method is available to measure blood or plasma anesthetic concentration continuously, and because the free concentration of the drug at its site of action cannot be determined. Generally, the potency of intravenous agents is defined as the free plasma concentration (at equilibrium) that produces loss of response to surgical incision (or other end points) in 50% of subjects (Franks and Lieb, 1994).

Mechanisms of Anesthesia

The molecular mechanisms by which general anesthetics produce their effects have remained one of the great mysteries of pharmacology. For most of the twentieth century, it was theorized that all anesthetics act by a common mechanism (the unitary theory of anesthesia) and that anesthesia is produced by perturbation of the physical properties of cell membranes. This thinking was based largely on observations made in the late nineteenth century that the potency of a gas as an anesthetic correlated with its solubility in olive oil. This correlation, referred to as the Meyer-Overton rule, was interpreted as indicating that the lipid bilayer is the likely target of anesthetic action. In the past decade, however, clear exceptions to the Meyer-Overton rule have been noted (Koblin et al., 1994). For example, it has been shown that inhalational and intravenous anesthetics can be enantioselective in their actions as anesthetics (etomidate, steroids, isoflurane) (Tomlin et al., 1998; Lysko et al., 1994; Wittmer et al., 1996). 
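The Meyer-Overton correlation described above is easy to reproduce numerically. Using approximate textbook values for MAC and the oil:gas partition coefficient of six inhaled agents (exact coefficients vary somewhat by source), the log-log relationship is nearly perfectly inverse:

```python
import math

# Approximate literature values: MAC (vol %) and oil:gas partition coefficient.
agents = {
    "nitrous oxide": (105.0, 1.4),
    "desflurane": (6.0, 19.0),
    "sevoflurane": (2.0, 47.0),
    "isoflurane": (1.15, 91.0),
    "enflurane": (1.68, 98.0),
    "halothane": (0.75, 224.0),
}

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

log_mac = [math.log10(mac) for mac, _ in agents.values()]
log_oil = [math.log10(lam) for _, lam in agents.values()]
r = pearson_r(log_oil, log_mac)
print(f"correlation of log MAC with log oil solubility: r = {r:.3f}")
# A strongly negative r (near -1) is the Meyer-Overton rule: the more
# lipid-soluble the gas, the lower its MAC, i.e., the greater its potency.
```

The near-unity inverse correlation is exactly why the lipid bilayer was so long assumed to be the target; the enantioselectivity data discussed here are striking precisely because enantiomers sit at the same point on this plot yet differ in potency.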
The fact that enantiomers have unique actions but identical bulk physical properties indicates that properties other than bulk solubility are important in determining anesthetic action. This realization has focused thinking on the identification of specific protein binding sites for anesthetics. One impediment to understanding the mechanisms of anesthesia has been the difficulty in precisely defining anesthesia itself. It has now become clear that an anesthetic agent produces different components of the anesthetic state via actions at different anatomic loci in the nervous system, and may produce these component effects via different cellular and/or molecular actions. It also is becoming clear that different anesthetic agents can produce a specific component of anesthesia via actions at different molecular targets. Given these insights, the unitary theory of anesthesia has been largely discarded. The ensuing sections will focus on the identification of specific anatomic, cellular, and molecular targets of anesthetic action. The complete mechanism(s) of anesthetic action have not been defined. The most difficult issue is mapping the effects of anesthetics on specific molecular targets onto the complex component behaviors that compose anesthesia. This is a particularly vexing problem for poorly understood behaviors such as consciousness (see Chapter 13: History and Principles of Anesthesiology).

Anatomic Sites of Anesthetic Action

General anesthetics could, in principle, interrupt nervous system function at myriad levels, including peripheral sensory neurons, the spinal cord, the brain stem, and the cerebral cortex. Delineation of the precise anatomic sites of action is difficult because many anesthetics diffusely inhibit electrical activity in the CNS. For example, isoflurane at 2 MAC can cause electrical silence in the brain (Newberg et al., 1983)! 
Despite this, in vitro studies have shown that specific cortical pathways exhibit markedly different sensitivities to both inhalational and intravenous anesthetics (MacIver and Roth, 1988; Richards and White, 1975; Nicoll, 1972). This suggests that anesthetics may produce specific components of the anesthetic state via actions at specific sites in the CNS. Consistent with this possibility, studies by Rampil (1994) and Antognini and Schwartz (1993) have demonstrated that immobilization in response to a surgical incision (the end point used in determining MAC) results from inhalational anesthetic action in the spinal cord. It is unlikely that amnesia or unconsciousness is the result of anesthetic actions in the spinal cord; thus, different components of anesthesia are produced at different sites in the CNS. One intravenous anesthetic, dexmedetomidine (an α2-adrenergic receptor agonist), has been shown to produce unconsciousness via actions in the locus coeruleus (Mizobe et al., 1996). While the sites at which other intravenous and inhalational anesthetics produce unconsciousness have not been identified, inhalational anesthetics recently have been shown to depress the excitability of thalamic neurons (Ries and Puil, 1999). This suggests the thalamus as a potential locus for the sedative effects of inhalational anesthetics, since blockade of thalamocortical communication would produce unconsciousness. Finally, both intravenous and inhalational anesthetics depress hippocampal neurotransmission (Kendig et al., 1991), providing a probable locus for the amnesic effects of anesthetics.

Physiological Mechanisms of Anesthesia

General anesthetics produce two important physiological effects at the cellular level. First, the inhalational anesthetics can hyperpolarize neurons (Nicoll and Madison, 1982). This may be an important effect on neurons serving a pacemaker role and on pattern-generating circuits. It also may be important in synaptic communication, since reduced excitability in a postsynaptic neuron reduces the likelihood that an action potential will be initiated in response to neurotransmitter release. Second, both inhalational and intravenous anesthetics have substantial effects on synaptic function. In this regard it is noteworthy that anesthetics appear to have minimal effects on action-potential generation or propagation at concentrations that affect synapses (Larrabee and Posternak, 1952). 
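The hyperpolarizing effect mentioned above can be made concrete with the Goldman-Hodgkin-Katz voltage equation: increasing the relative potassium permeability of the membrane, as activation of a K+ leak conductance would, pulls the membrane potential toward the K+ equilibrium potential. The ion concentrations below are typical mammalian values; the permeability ratios, and the doubling of K+ permeability, are purely illustrative:

```python
import math

R, T, F = 8.314, 310.0, 96485.0  # J/(mol*K), body temperature in K, C/mol

def ghk_vm(pK, pNa, pCl, Ko=5.0, Ki=140.0, Nao=145.0, Nai=12.0,
           Clo=110.0, Cli=10.0):
    """Goldman-Hodgkin-Katz membrane potential in volts.

    Concentrations (mM) are typical mammalian values; permeabilities are
    relative. The Cl- terms are inverted (internal concentration in the
    numerator) because of its -1 charge.
    """
    num = pK * Ko + pNa * Nao + pCl * Cli
    den = pK * Ki + pNa * Nai + pCl * Clo
    return (R * T / F) * math.log(num / den)

rest = ghk_vm(pK=1.0, pNa=0.04, pCl=0.45)      # typical resting permeabilities
hyper = ghk_vm(pK=2.0, pNa=0.04, pCl=0.45)     # hypothetical doubled K+ leak
print(f"resting Vm ~ {1000 * rest:.1f} mV; with extra K+ leak ~ {1000 * hyper:.1f} mV")
```

Even a modest increase in K+ permeability shifts the computed potential several millivolts more negative, which is the sense in which an anesthetic-activated K+ leak (such as the two-pore-domain channels discussed below) would dampen neuronal excitability.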
The inhalational anesthetics have been shown to inhibit excitatory synapses and enhance inhibitory synapses in various preparations. It seems likely that these effects are produced by both pre- and postsynaptic actions of the inhalational anesthetics. There is clear evidence that the inhalational anesthetic isoflurane can inhibit neurotransmitter release (Perouansky et al., 1995; MacIver et al., 1996); this may be mediated via an effect on the neurosecretory machinery (van Swinderen et al., 1999). It also is abundantly clear that inhalational anesthetics can act postsynaptically, altering the response to released neurotransmitter. These actions are thought to be due to specific interactions of anesthetic agents with neurotransmitter receptors. The intravenous anesthetics produce a narrower range of physiological effects. Their predominant actions are at the synapse, where they have profound but relatively specific effects on the postsynaptic response to released neurotransmitter. Most of the intravenous agents act predominantly by enhancing inhibitory neurotransmission, whereas ketamine predominantly inhibits excitatory neurotransmission at glutamatergic synapses.

Molecular Actions of General Anesthetics

The electrophysiological effects of general anesthetics at the cellular level suggest several potential molecular targets for anesthetic action. There is strong evidence supporting ligand-gated ion channels as important targets for anesthetic action. Chloride channels gated by the inhibitory neurotransmitter γ-aminobutyric acid (GABAA receptors; see Chapter 17: Hypnotics and Sedatives) are sensitive to clinical concentrations of a wide variety of anesthetics, including the halogenated inhalational agents and many intravenous agents (propofol, barbiturates, etomidate, and the neurosteroids) (Krasowski and Harrison, 1999). 
At clinical concentrations, general anesthetics increase the sensitivity of the GABAA receptor to GABA, thus enhancing inhibitory neurotransmission and depressing nervous system activity. The action of anesthetics on the GABAA receptor appears to be mediated by binding of the anesthetics to specific sites on the GABAA-receptor protein, since point mutations in the receptor can eliminate the effects of the anesthetic on ion channel function (Mihic et al., 1997). It also seems likely that there are specific binding sites for at least several classes of anesthetics, as mutations in various regions (and subunits) of the GABAA receptor selectively affect the actions of various anesthetics (Belelli et al., 1997; Krasowski and Harrison, 1999). It should be noted that none of the general anesthetics competes with GABA for its binding site on the receptor. Which components of anesthesia are mediated by actions of anesthetics on GABAA receptors remains a subject of conjecture. The fact that GABA mimetics themselves can produce unconsciousness suggests a role for GABAA receptors in mediating the hypnotic effects of general anesthetics (Cheng and Brunner, 1985). Closely related to the GABAA receptors are other ligand-gated ion channels, including glycine receptors and neuronal nicotinic acetylcholine receptors. Clinical concentrations of the inhalational anesthetics enhance the ability of glycine to activate glycine-gated chloride channels (glycine receptors), which play an important role in inhibitory neurotransmission in the spinal cord and brain stem. Propofol (Hales and Lambert, 1988), the neurosteroids, and the barbiturates also potentiate glycine-activated currents, whereas etomidate and ketamine do not (Mascia et al., 1996). Glycine receptors may play a role in mediating the inhibition by anesthetics of responses to noxious stimuli. Subanesthetic concentrations of the inhalational anesthetics inhibit some classes of neuronal nicotinic acetylcholine receptors (Violet et al., 1997; Flood et al., 1997); these neuronal nicotinic receptors may play a role in mediating the analgesic effects of inhalational anesthetic agents. The only general anesthetics that do not have significant effects on GABAA or glycine receptors are ketamine, nitrous oxide, and xenon. 
All of these agents have been shown to inhibit a different type of ligand-gated ion channel, the N-methyl-D-aspartate (NMDA) receptor (see Chapter 12: Neurotransmission and the Central Nervous System). NMDA receptors are glutamate-gated cation channels that are somewhat selective for calcium and are involved in long-term modulation of synaptic responses (long-term potentiation) and glutamate-mediated neurotoxicity. Ketamine inhibits NMDA receptors by binding to the phencyclidine site on the NMDA-receptor protein (Lodge et al., 1982; Anis et al., 1983; Zeilhofer et al., 1992). The NMDA receptor is thought to be the principal molecular target for the anesthetic actions of ketamine. Recent studies also show that nitrous oxide (Mennerick et al., 1998; Jevtovic-Todorovic et al., 1998) and xenon (Franks et al., 1998; de Sousa et al., 2000) are potent and selective inhibitors of NMDA-activated currents, suggesting that these agents also may produce unconsciousness via actions on NMDA receptors. Inhalational anesthetics have two other identified molecular targets that may be important in some of their actions. Some members of a class of potassium channels known as two-pore domain channels are activated by inhalational anesthetics (Gray et al., 1998; Patel et al., 1999). These channels are important in setting the resting membrane potential of a neuron and may be the molecular locus through which these agents hyperpolarize neurons. A second target is the molecular machinery involved in neurotransmitter release. Recent evidence shows that the action of inhalational anesthetics requires a protein complex (syntaxin, SNAP-25, synaptobrevin) involved in synaptic neurotransmitter release (van Swinderen et al., 1999). These molecular interactions may explain the ability of inhalational anesthetics to cause presynaptic inhibition in the hippocampus, and could contribute to the amnesic effect of inhalational anesthetics. 
Summary Current evidence supports the view that most of the intravenous general anesthetics act predominantly through GABAA receptors and perhaps through some interactions with other ligand-gated ion channels. The halogenated inhalational agents have a variety of molecular targets, consistent with their status as complete (all components) anesthetics. Nitrous oxide, ketamine, and xenon constitute a third category of general anesthetics that are likely to produce unconsciousness via inhibition of the NMDA receptor.

*Parenteral Anesthetics
Pharmacokinetic Principles Parenteral anesthetics are small, hydrophobic, substituted aromatic or heterocyclic compounds (Figure 14-1). Hydrophobicity is the key factor governing the pharmacokinetics of this class of drugs (Bischoff and Dedrick, 1968; Burch and Stanski, 1983; Shafer and Stanski, 1992). After a single intravenous bolus, each of these drugs preferentially partitions into the highly perfused and lipophilic brain and spinal cord tissue, where it produces anesthesia within a single circulation time. Subsequently, blood levels fall rapidly, resulting in redistribution of anesthetic out of the central nervous system back into the blood, where it then diffuses into less-well-perfused tissues such as muscle and viscera and, at a slower rate, into the poorly perfused but very hydrophobic adipose tissue. Termination of anesthesia after single boluses of parenteral anesthetics is primarily by redistribution out of the nervous system rather than by metabolism (for example, see Figure 14-2). After redistribution, anesthetic blood levels fall according to a complex interaction between the metabolic rate and the amount and lipophilicity of the drug stored in the peripheral compartments (Hughes et al., 1992; Shafer and Stanski, 1992). Thus, parenteral anesthetic half-lives are "context-sensitive," and the degree to which a half-life is contextual varies greatly from drug to drug, as might be predicted based on their markedly different hydrophobicities and metabolic clearances (Table 14-2; Figure 14-3). For example, after a single bolus of thiopental, patients usually emerge from anesthesia within 10 minutes; however, a patient may require more than a day to awaken from a prolonged thiopental infusion. The majority of individual variability in sensitivity to parenteral anesthetics can be accounted for by pharmacokinetic factors (Wada et al., 1997; Wulfsohn and Joshi, 1969). 
For example, in patients with lower cardiac output, the relative perfusion of and fraction of anesthetic dose delivered to the brain is higher; thus, patients in septic shock and those with cardiomyopathy usually require lower doses of anesthetic. Elderly patients typically require a smaller anesthetic dose, primarily because of a smaller initial volume of distribution (Arden et al., 1986; Homer and Stanski, 1985). As described below, similar principles govern the pharmacokinetics of the hydrophobic inhalational anesthetics, with the added complexity of drug uptake by inhalation. Figure 14-1. Structures of Parenteral Anesthetics.

Figure 14-2. Thiopental Serum Levels after a Single Intravenous Induction Dose. Thiopental serum levels after a bolus can be described by two time constants, t1/2α and t1/2β. The initial fall is rapid (t1/2α < 10 min) and is due to redistribution of drug from the plasma and the highly perfused brain and spinal cord into less well-perfused tissues such as muscle and fat. During this redistribution phase, serum thiopental concentration falls to levels (AL, awakening level) where patients awaken (see inset: the average thiopental serum concentration in 12 patients after a 6-mg/kg intravenous bolus of thiopental). Subsequent metabolism and elimination is much slower and is characterized by a half-life (t1/2β) of more than 10 hours. (Adapted with permission from Burch and Stanski, 1983.)
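The biexponential decay described in the caption can be expressed numerically. The coefficients and half-lives below are illustrative placeholders (not the values measured by Burch and Stanski), chosen only so that the fast redistribution phase and slow elimination phase are both visible:

```python
import math

def thiopental_conc(t_min, A=25.0, B=10.0, t_half_alpha=3.0, t_half_beta=700.0):
    """Biexponential serum decay: C(t) = A*exp(-k_alpha*t) + B*exp(-k_beta*t).
    The alpha term models fast redistribution, the beta term slow elimination.
    Units: mg/L and minutes; all numbers are illustrative, not measured data."""
    k_alpha = math.log(2) / t_half_alpha
    k_beta = math.log(2) / t_half_beta
    return A * math.exp(-k_alpha * t_min) + B * math.exp(-k_beta * t_min)

def time_to_awakening(awakening_level=14.0, dt=0.1):
    """First time (in minutes) the modelled level falls below a hypothetical
    awakening level (AL), which happens during the fast redistribution phase."""
    t = 0.0
    while thiopental_conc(t) >= awakening_level:
        t += dt
    return t

print(round(time_to_awakening(), 1))  # crosses AL within minutes, not hours
```

Because the beta half-life here is two orders of magnitude longer than the alpha half-life, the time to reach the awakening level is governed almost entirely by the redistribution term, which is the point the figure makes.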

Figure 14-3. Context-Sensitive Half-Time of General Anesthetics. The duration of action of single intravenous doses of anesthetic/hypnotic drugs is similarly short for all and is determined by redistribution of the drugs away from their active sites. However, after prolonged infusions, drug half-lives and durations of action are dependent on a complex interaction between the rate of redistribution of the drug, the amount of drug accumulated in fat, and the drug's metabolic rate. Thus, drug half-lives vary greatly. This phenomenon has been termed the context-sensitive half-time (that is, the half-time of a drug can only be estimated if one knows the context: the total dose and over what time it has been given). Note that the half-times of some drugs such as etomidate, propofol, and ketamine increase only modestly with prolonged infusions; others (e.g., diazepam and thiopental) increase dramatically. (Reproduced with permission from Reves et al., 1994.)
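The context-sensitivity described in the caption can be reproduced with a toy two-compartment model: a central compartment exchanging with a slowly equilibrating peripheral (fat) compartment, with elimination from the central compartment only. All rate constants below are invented for illustration and are not fitted to any of the drugs named above:

```python
def half_time_after_infusion(infusion_min, k_el=0.02, k12=0.1, k21=0.005,
                             rate=1.0, dt=0.01):
    """Run a constant-rate infusion for infusion_min minutes (Euler steps),
    stop it, and return how many minutes the central-compartment level takes
    to fall to half its end-of-infusion value. c1 = central, c2 = peripheral.
    Rate constants are per-minute and purely illustrative."""
    c1 = c2 = 0.0
    for _ in range(int(infusion_min / dt)):
        dc1 = rate - (k_el + k12) * c1 + k21 * c2
        dc2 = k12 * c1 - k21 * c2
        c1, c2 = c1 + dc1 * dt, c2 + dc2 * dt
    target = c1 / 2.0
    elapsed = 0.0
    while c1 > target:
        dc1 = -(k_el + k12) * c1 + k21 * c2
        dc2 = k12 * c1 - k21 * c2
        c1, c2 = c1 + dc1 * dt, c2 + dc2 * dt
        elapsed += dt
    return elapsed

# The half-time depends dramatically on the infusion duration (the "context"):
print(round(half_time_after_infusion(10), 1))   # short infusion: a few minutes
print(round(half_time_after_infusion(480), 1))  # 8-hour infusion: far longer
```

After a short infusion the peripheral compartment is nearly empty and the central level halves quickly; after a long infusion the loaded peripheral compartment keeps returning drug to the blood, so the same 50% fall takes many times longer, mirroring the diazepam/thiopental curves in the figure.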

Barbiturates Chemistry and Formulations Anesthetic barbiturates are derivatives of barbituric acid (2,4,6-trioxohexahydropyrimidine), with either an oxygen or a sulfur at the 2-position (Figure 14-1). The three barbiturates used for clinical anesthesia are sodium thiopental (PENTOTHAL), thiamylal (SURITAL), and methohexital (BREVITAL). Sodium thiopental is the most frequently used of the barbiturates for inducing anesthesia. All three barbiturate anesthetics are supplied as racemic mixtures despite enantioselectivity in their anesthetic potency (Andrews and Mark, 1982; Christensen and Lee, 1973; Nguyen et al., 1996). Barbiturates are formulated as the sodium salts with 6% sodium carbonate and reconstituted in water or isotonic saline solution to produce 1% (methohexital), 2% (thiamylal), or 2.5% (thiopental) alkaline solutions with pHs of 10 to 11. Once reconstituted, the thiobarbiturates are stable in solution for up to 1 week and methohexital for up to 6 weeks if refrigerated. Mixing with more acidic drugs commonly used during anesthetic induction can result in precipitation of the barbiturate as the free acid; thus, standard practice is to delay the administration of other drugs until the barbiturate has cleared the intravenous tubing. Pharmacokinetics Pharmacokinetic parameters for each drug are given in Table 14-2. As discussed above, the principal mechanism limiting anesthetic duration after single doses is redistribution of these hydrophobic drugs from the brain to other tissues. However, after multiple doses or infusions, the duration of action of the barbiturates varies considerably depending on their clearances. Methohexital differs from the other two barbiturates in its much more rapid clearance; thus, it accumulates less during prolonged infusions (Schwilden and Stoeckel, 1990). 
Prolonged infusions or very large doses of thiopental and thiamylal can produce unconsciousness lasting several days because of their slow elimination and large volumes of distribution (Stanski et al., 1980). Even single induction doses of thiopental and, to a lesser degree, methohexital can produce psychomotor impairment lasting up to 8 hours (Beskow et al., 1995; Korttila et al., 1975). Methohexital had been used frequently for outpatient procedures where rapid return to an alert state is particularly desirable, but this role now has been filled largely by the anesthetic propofol (see below). All three drugs are eliminated primarily by hepatic metabolism and renal excretion of inactive metabolites (Broadie et al., 1950); a small fraction of thiopental undergoes a desulfuration reaction to the longer-acting hypnotic pentobarbital (Chan et al., 1985). Each drug is highly protein bound (Table 14-2). Hepatic disease or other conditions that reduce serum protein concentrations will decrease the volume of distribution and thereby increase the initial free concentration and hypnotic effect of an induction dose (Ghoneim and Pandya, 1975). Clinical Use Recommended intravenous doses for all three drugs in a healthy young adult are given in Table 14-2. The typical induction dose of thiopental (3 to 5 mg/kg) produces unconsciousness in 10 to 30 seconds, with a peak effect in one minute and a duration of anesthesia of 5 to 8 minutes (Dundee et al., 1982). Neonates and infants usually require a higher induction dose (5 to 8 mg/kg), whereas elderly and pregnant patients require less (1 to 3 mg/kg) (Gin et al., 1997; Homer and Stanski, 1985; Jonmarker et al., 1987). Dosage calculation based on lean body mass reduces individual variation in dosage requirements. Doses can be reduced 10% to 50% after premedication with benzodiazepines, opioids, and/or α2-adrenergic receptor agonists because of their additive hypnotic effect (Nishina et al., 1994; Short et al., 1991; Wang et al., 1996). 
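As a purely arithmetic illustration of the dosing rules in this paragraph (a typical adult mg/kg figure scaled by lean body mass, reduced 10% to 50% with premedication), one might write the following sketch. It models the arithmetic only and is not clinical guidance; the 4-mg/kg default is simply the middle of the 3- to 5-mg/kg range quoted above:

```python
def thiopental_induction_dose_mg(lean_body_mass_kg, mg_per_kg=4.0,
                                 premed_reduction=0.0):
    """Induction dose = lean body mass * mg/kg figure, reduced by a fraction
    (0.10 to 0.50, per the text) when benzodiazepines, opioids, or alpha2
    agonists have been given. Illustrative arithmetic, not clinical guidance."""
    if not 0.0 <= premed_reduction <= 0.5:
        raise ValueError("text quotes premedication reductions of 10% to 50%")
    return lean_body_mass_kg * mg_per_kg * (1.0 - premed_reduction)

# 70 kg lean mass, with a hypothetical 25% reduction for premedication:
print(thiopental_induction_dose_mg(70, premed_reduction=0.25))  # 210.0
```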
Thiamylal is approximately equipotent with and in all aspects very similar to thiopental (Tovell et al., 1955). Methohexital is three times as potent as but otherwise similar to thiopental in onset and duration of action (Thornton, 1970; Tovell et al., 1955). Thiopental and thiamylal produce little or no pain on injection; methohexital elicits mild pain. Venoirritation can be reduced by injection into larger nonhand veins and by prior intravenous injection of lidocaine (0.5 to 1 mg/kg). Intraarterial injection of thiobarbiturates can induce a severe inflammatory and potentially necrotic reaction and should be avoided (Dohi and Naito, 1983; Waters, 1966). Thiopental often evokes the taste of garlic just prior to inducing anesthesia (Nor et al., 1996). Methohexital and, to a lesser degree, the other barbiturates can produce excitement phenomena such as muscle tremor, hypertonus, and hiccoughs (Clarke, 1981; Thornton, 1970). For induction of pediatric patients without intravenous access, all three drugs can be given by rectum at approximately 10 times the intravenous dose. Side Effects Nervous System Besides producing general anesthesia, barbiturates dose-dependently reduce cerebral metabolic rate as measured by cerebral oxygen utilization (cerebral metabolic rate for oxygen; CMRO2). Induction doses of thiopental reduce CMRO2 about 25% to 30%, with a maximal decrease of 55% occurring at about 2 to 5 times the induction dose (Pierce et al., 1962; Stullken et al., 1977). As a consequence of the decrease in CMRO2, cerebral blood flow and intracranial pressure are similarly reduced (Nussmeier et al., 1986). Thiopental also reduces intraocular pressure (Joshi and Bruce, 1975). Presumably because of their CNS-depressant activity, barbiturates are effective anticonvulsants (see Chapter 21: Drugs Effective in the Therapy of the Epilepsies). Thiopental in particular is of proven value in the treatment of status epilepticus (Modica et al., 1990). 
Cardiovascular System The anesthetic barbiturates produce dose-dependent decreases in blood pressure. The effect is caused primarily by vasodilation, particularly venodilation, and to a lesser degree by a mild direct decrease in myocardial contractility (Elder et al., 1955; Etsten and Li, 1955; Fieldman et al., 1955). Typically, heart rate increases as a compensatory response to a lower blood pressure, although barbiturates do blunt the baroreceptor reflex (Bristow et al., 1969). Drops in blood pressure can be severe in patients with an impaired ability to compensate for venodilation, such as those with hypovolemia, cardiomyopathy, valvular heart disease, coronary artery disease, cardiac tamponade, or β-adrenergic receptor blockade. Thiopental is not necessarily contraindicated in patients with coronary artery disease, because the ratio of myocardial oxygen supply to demand appears to be adequately maintained within a patient's normal blood pressure range (Reiz et al., 1981). None of the barbiturates has been shown to be arrhythmogenic. Respiratory System Barbiturates are respiratory depressants. Induction doses of thiopental decrease minute ventilation and tidal volume, with a smaller and inconsistent reduction in respiratory rate (Grounds et al., 1987). Reflex responses to hypercarbia and hypoxia are diminished by anesthetic barbiturates (Gross et al., 1983; Hirshman et al., 1975), and apnea can result at higher doses or in the presence of other respiratory depressants such as opioids. With the exception of uncommon anaphylactoid reactions, these drugs have little effect on bronchomotor tone and can be used safely in asthmatic patients (Kingston and Hirshman, 1984).

Other Side Effects Short-term administration of barbiturates has no clinically significant effects on the hepatic, renal, or endocrine systems. A single induction dose of thiopental does not alter gravid uterine tone but produces mild, transient depression of activity of the newborn (Kosaka et al., 1969). True allergies to barbiturates are rare (Baldo et al., 1991); however, drug-induced histamine release occasionally is seen (Hirshman et al., 1982; Sprung et al., 1997). Barbiturates can induce fatal attacks of porphyria in patients with acute intermittent or variegate porphyria and are contraindicated in such patients (Dundee et al., 1962). Unlike inhalational anesthetics and succinylcholine, barbiturates and all other parenteral anesthetics do not appear to trigger malignant hyperthermia (Rosenberg et al., 1997). Propofol Chemistry and Formulations Along with thiopental, propofol (DIPRIVAN) is the most commonly used parenteral anesthetic. Propofol, 2,6-diisopropylphenol, is essentially insoluble in aqueous solutions and is formulated only for intravenous administration as a 1% (10 mg/ml) emulsion in 10% soybean oil, 2.25% glycerol, and 1.2% purified egg phospholipid. In the United States, disodium EDTA (0.05 mg/ml) or sodium metabisulfite (0.25 mg/ml) is added to inhibit bacterial growth. Nevertheless, significant bacterial contamination of open containers still has been reported and associated with serious infection (Bennett et al., 1995); propofol should be administered shortly after removal from sterile packaging or discarded. Pharmacokinetics The pharmacokinetics of propofol are governed by the same principles that apply to barbiturates. Onset and duration of anesthesia after a single bolus are similar to those of thiopental (Langley and Heel, 1988). However, recovery after multiple doses or infusion has been shown to be much faster after propofol than after thiopental or even methohexital (Doze et al., 1986; Langley and Heel, 1988). 
The rapid rate of recovery after infusion of propofol can be explained by its very high clearance coupled with the slow diffusion of drug from the peripheral to the central compartment (Figure 14-3). The rapid clearance of propofol explains its less severe hangover compared to barbiturates and may allow for a more rapid discharge from the recovery room (Bryson et al., 1995). Propofol is metabolized primarily in the liver to less-active metabolites that are renally excreted (Simons et al., 1988); however, its clearance exceeds hepatic blood flow, and extrahepatic metabolism has been demonstrated (Veroli et al., 1992). Propofol is highly protein bound, and its pharmacokinetics, like those of the barbiturates, may be affected by conditions that alter serum protein levels (Kirkpatrick et al., 1988). Clinical Use The induction dose of propofol in a healthy adult is 1.5 to 2.5 mg/kg. Propofol has an onset and duration of anesthesia similar to those of thiopental (Table 14-2). As with barbiturates, dosages should be reduced in elderly patients and in the presence of other sedatives and increased in young children (Aun et al., 1992; Dundee et al., 1986). Propofol often is used for maintenance of anesthesia as well as induction. For short procedures, small boluses (10% to 50% of the induction dose) every 5 minutes or as needed are effective. Because they produce more stable drug levels, propofol infusions (100 to 300 μg/kg per minute) are better suited for longer-term anesthetic maintenance. Infusion rates should be tailored to patient response and the levels of other hypnotics.
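The extrahepatic-metabolism inference in this paragraph rests on a simple bound: an organ cannot clear more drug than is delivered to it, so hepatic clearance is capped at hepatic blood flow (extraction ratio at most 1). The figures below are rough illustrative placeholders, not values from the cited studies:

```python
# Upper bound on hepatic clearance: even if every molecule delivered to the
# liver were extracted (extraction ratio = 1), hepatic clearance could not
# exceed hepatic blood flow. Numbers are rough placeholders for illustration.
hepatic_blood_flow_l_min = 1.5   # approximate adult hepatic blood flow
measured_clearance_l_min = 1.9   # illustrative total propofol clearance

max_hepatic_clearance = hepatic_blood_flow_l_min * 1.0  # extraction ratio <= 1
extrahepatic_l_min = measured_clearance_l_min - max_hepatic_clearance

# If total clearance exceeds the hepatic ceiling, the excess must occur
# elsewhere, i.e., extrahepatic metabolism:
print(extrahepatic_l_min > 0)  # True
```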

Sedating doses of propofol are 20% to 50% of those required for general anesthesia. However, even at these lower doses, caregivers should be vigilant and prepared for all of the side effects of propofol discussed below, particularly airway obstruction and apnea. Propofol elicits pain on injection, which can be reduced with lidocaine and the use of larger arm and antecubital veins (McCulloch and Lees, 1985; Picard and Tramer, 2000). Excitatory phenomena during induction with propofol occur at about the same frequency as with thiopental but much less frequently than with methohexital (Langley and Heel, 1988). Side Effects Nervous System The central nervous system effects of propofol are similar to those of barbiturates. Propofol decreases CMRO2, cerebral blood flow, and intracranial and intraocular pressures by about the same amount as does thiopental (Langley and Heel, 1988; Ravussin et al., 1988; Vandesteene et al., 1988). Like thiopental, propofol has been used in patients at risk for cerebral ischemia (Ravussin and de Tribolet, 1993); however, no human outcome studies have been performed to determine propofol's efficacy as a neuroprotectant. Results from studies on the anticonvulsant effects of propofol have been mixed, with some data even suggesting that it has proconvulsant activity when combined with other drugs (Modica et al., 1990). Thus, unlike thiopental, propofol is not a proven acute intervention for seizures. Cardiovascular System Propofol produces a dose-dependent decrease in blood pressure that is significantly greater than that produced by thiopental (Grounds et al., 1985; Langley and Heel, 1988). The fall in blood pressure can be explained by both vasodilation and mild depression of myocardial contractility (Claeys et al., 1988; Grounds et al., 1985). 
Propofol appears to blunt the baroreceptor reflex and/or is directly vagotonic, because smaller increases in heart rate are seen for any given drop in blood pressure after doses of propofol (Claeys et al., 1988; Langley and Heel, 1988). As with thiopental, propofol should be used with caution in patients at risk for or intolerant of decreases in blood pressure. Respiratory and Other Side Effects At equianesthetic doses, propofol produces a slightly greater degree of respiratory depression than does thiopental (Blouin et al., 1991; Taylor et al., 1986). Patients given propofol should be monitored to ensure adequate oxygenation and ventilation. Propofol appears to be less likely than barbiturates to provoke bronchospasm (Eames et al., 1996; Pizov et al., 1995). It has no clinically significant effects on hepatic, renal, or endocrine organ systems. Unlike thiopental, propofol appears to have significant antiemetic action and is a good choice for sedation or anesthesia in patients at high risk for nausea and vomiting (Gan et al., 1996; McCollum et al., 1988). Propofol provokes anaphylactoid reactions and histamine release at about the same low frequency as does thiopental (Bryson et al., 1995; Laxenaire et al., 1992). Although propofol does cross placental membranes, it is considered safe for use in pregnant patients and transiently depresses activity of the newborn similarly to thiopental (Abboud et al., 1995). Etomidate Chemistry and Formulation Etomidate (AMIDATE) is a substituted imidazole that is supplied as the active D-isomer (Figure 14-1). Etomidate is poorly soluble in water and is formulated as a 2-mg/ml solution in 35% propylene glycol. Unlike thiopental, etomidate does not induce precipitation of neuromuscular blocking agents or other drugs frequently given during anesthetic induction (Hadzija and Lubarsky, 1995). Pharmacokinetics An induction dose of etomidate has a rapid onset and a redistribution-limited duration of action (Table 14-2). Metabolism of etomidate occurs in the liver, where it is primarily hydrolyzed to inactive compounds (Gooding and Corssen, 1976; Heykants et al., 1975). Elimination is both renal (78%) and biliary (22%). Compared to thiopental, the duration of action of etomidate increases less with repeated doses (Figure 14-3). The plasma-protein binding of etomidate is high but less than that of barbiturates and propofol (Table 14-2). Clinical Use Etomidate primarily is used for anesthetic induction of patients at risk for hypotension. Induction doses of etomidate (0.2 to 0.4 mg/kg) have a rapid onset and a short duration of action (Table 14-2); they are accompanied by a high incidence of pain on injection and myoclonic movements (Giese and Stanley, 1983). As with propofol, lidocaine effectively reduces the pain of injection (Galloway et al., 1982). The myoclonic movements can be reduced by premedication with either benzodiazepines or opioids (Zacharias et al., 1979). Etomidate is pharmacokinetically suitable for infusion for anesthetic maintenance (10 μg/kg per minute) or sedation (5 μg/kg per minute) (Fragen et al., 1983); however, long-term infusions are not recommended for reasons discussed below. Etomidate also may be given by rectum (6.5 mg/kg) with an onset of about 5 minutes (Linton and Thornington, 1983). Side Effects Nervous System The effects of etomidate on cerebral blood flow, metabolism, and intracranial and intraocular pressures are similar to those of thiopental (Modica and Tempelhoff, 1992; Renou et al., 1978; Thomson et al., 1982). 
Etomidate has been tried as a protective agent against cerebral ischemia (Batjer, 1993). However, animal studies have failed to show a consistent beneficial effect (Drummond et al., 1995; Guo et al., 1995), and no controlled human trials have been performed. Etomidate has been shown in some studies to be a proconvulsant and is not a proven treatment for seizures (Modica et al., 1990). Cardiovascular System Cardiovascular stability after induction is a major advantage of etomidate over either barbiturates or propofol. Induction doses of etomidate typically produce a small increase in heart rate and little to no decrease in blood pressure or cardiac output (Criado et al., 1980; Gooding and Corssen, 1977; Gooding et al., 1979). Etomidate has little effect on coronary perfusion pressure and reduces myocardial oxygen consumption (Kettler et al., 1974). Thus, of all induction agents, etomidate is best suited to maintain cardiovascular stability in patients with coronary artery disease, cardiomyopathy, cerebral vascular disease, and/or hypovolemia. Respiratory and Other Side Effects The degree of respiratory depression by etomidate appears to be less than that by thiopental (Colvin et al., 1979; Morgan et al., 1977). Like methohexital, it sometimes induces hiccups but does not significantly stimulate histamine release (Doenicke et al., 1973; Zacharias et al., 1979). Despite minimal cardiac and respiratory effects, etomidate does have two major drawbacks. First, etomidate has been associated with a significant increase in nausea and vomiting (Fragen and Caldwell, 1979). A second problem was discovered when an increase in the mortality of intensive-care-unit patients sedated with etomidate infusions was observed (Ledingham and Watt, 1983). The increased mortality was linked to suppression of the adrenocortical stress response (Ledingham et al., 1983). Indeed, etomidate inhibits certain adrenal biosynthetic enzymes required for the production of cortisol and some other steroids. Even single induction doses of etomidate may mildly and transiently reduce cortisol levels (Allolio et al., 1985; Fragen et al., 1984; Wagner et al., 1984), but no significant differences in outcome after short-term administration have been found, even for variables specifically known to be associated with adrenocortical suppression (Wagner et al., 1984). Thus, while etomidate is not recommended for long-term infusion, it appears to be safe for anesthetic induction and has some unique advantages in patients prone to hemodynamic instability. Ketamine Chemistry and Formulation Ketamine (KETALAR) is an arylcyclohexylamine, a congener of phencyclidine (Figure 14-1). It is supplied as a racemic mixture, despite the (S)-isomer being more potent and having fewer side effects than the (R)-isomer (White et al., 1982). Although more lipophilic than thiopental, ketamine is water-soluble and available as 10, 50, and 100 mg/ml in sodium chloride solution plus the preservative benzethonium chloride. Pharmacokinetics The onset and duration of an induction dose of ketamine is determined by the same distribution/redistribution mechanism operant for all the other parenteral anesthetics. 
Ketamine is hepatically metabolized to norketamine, which has reduced CNS activity; norketamine is further metabolized and excreted in the urine and bile (Chang and Glazko, 1974). Ketamine has a large volume of distribution and a rapid clearance that make it suitable for continuous infusion without the drastic lengthening in duration of action seen with thiopental (Table 14-2 and Figure 14-3). Protein binding is much lower with ketamine than with the other parenteral anesthetics (Table 14-2). Clinical Use Ketamine has unique properties that make it useful for certain pediatric procedures and for anesthetizing patients at risk for hypotension or bronchospasm. However, it has significant side effects that limit its routine use. Ketamine rapidly produces a hypnotic state distinct from that of other anesthetics. Patients have profound analgesia, unresponsiveness to commands, and amnesia but may have their eyes open, move their limbs involuntarily, and usually have spontaneous respiration. This cataleptic state has been termed dissociative anesthesia. Ketamine is typically administered intravenously but also is effective by intramuscular, oral, and rectal routes. The induction doses are 0.5 to 1.5 mg/kg intravenously, 4 to 6 mg/kg intramuscularly, and 8 to 10 mg/kg rectally (White et al., 1982). The onset of action after an intravenous dose is similar to that of the other parenteral anesthetics, but the duration of anesthesia of a single dose is longer (Table 14-2). For anesthetic maintenance, ketamine occasionally is continued as an infusion (25 to 100 μg/kg per minute) (White et al., 1982). Ketamine does not elicit pain on injection or true excitatory behavior as described for methohexital, although involuntary movements produced by ketamine can be mistaken for anesthetic excitement. Side Effects Nervous System As mentioned, ketamine has behavioral effects distinct from those of other anesthetics. The ketamine-induced cataleptic state is accompanied by nystagmus with pupillary dilation, salivation and/or lacrimation, and spontaneous limb movements with increased overall muscle tone. Although ketamine does not produce the classic anesthetic state, patients are anesthetized in that they are amnestic and unresponsive to painful stimuli. Indeed, ketamine produces profound analgesia, a distinct advantage over other parenteral anesthetics (White et al., 1982). Unlike other parenteral anesthetics, ketamine increases cerebral blood flow and intracranial pressure with minimal alteration of cerebral metabolism (Gardner et al., 1971; Takeshita et al., 1972; Wyte et al., 1972). These effects can be attenuated by concurrent administration of thiopental and/or benzodiazepines along with hyperventilation (Belopavlovic and Buchthal, 1982; Mayberg et al., 1995). However, given that other anesthetics actually reduce intracranial pressure and cerebral metabolism, ketamine is relatively contraindicated for patients with increased intracranial pressure or those at risk for cerebral ischemia. In some studies, ketamine has been shown to increase intraocular pressure, and its use for induction of patients with open eye injuries is controversial (Whitacre and Ellis, 1984). The effects of ketamine on seizure activity appear to be mixed, with neither strong pro- nor anticonvulsant activity (Modica et al., 1990). Emergence delirium characterized by hallucinations, vivid dreams, and illusions is a frequent complication of ketamine that can result in serious patient dissatisfaction and can complicate postoperative management (White et al., 1982). Delirium symptoms are most frequent in the first hour after emergence and occur less frequently in children (Sussman, 1974). 
Benzodiazepines reduce the incidence of emergence delirium (Dundee and Lilburn, 1978). Cardiovascular System Unlike other anesthetics, induction doses of ketamine typically increase blood pressure, heart rate, and cardiac output (Stanley et al., 1968). The cardiovascular effects are indirect and are most likely mediated by inhibition of both central and peripheral catecholamine reuptake (White et al., 1982). Ketamine has direct negative inotropic and vasodilating activity, but these effects usually are overwhelmed by the indirect sympathomimetic action (Pagel et al., 1992). Thus, ketamine is a useful drug in patients at risk for hypotension during anesthesia. While not arrhythmogenic, ketamine increases myocardial oxygen consumption and is not an ideal drug for patients at risk for myocardial ischemia (Reves et al., 1978). Respiratory System The respiratory effects of ketamine are perhaps the best indication for its use. Induction doses of ketamine produce small and transient decreases in minute ventilation, but respiratory depression is less severe than with other general anesthetics (White et al., 1982). Ketamine is a potent bronchodilator due to its indirect sympathomimetic activity and perhaps some direct bronchodilating activity (Hirshman et al., 1979; Wanna and Gergis, 1978). Thus, ketamine is particularly well suited for anesthetizing patients at high risk for bronchospasm. Summary of Parenteral Anesthetics Parenteral anesthetics are the most commonly used drugs for anesthetic induction of adults. Their lipophilicity coupled with the relatively high perfusion of the brain and spinal cord results in a rapid onset and short duration after a single bolus dose. However, these drugs ultimately accumulate in fatty tissue, prolonging the patient's recovery if multiple doses are given, particularly for drugs with lower rates of clearance. Each anesthetic has its own unique set of properties and side effects (summarized in Table 14-3). Thiopental and propofol are the two most commonly used parenteral agents. Thiopental has a long-established track record of safety. Propofol is advantageous for procedures where rapid return to a preoperative mental status is desirable. Etomidate usually is reserved for patients at risk for hypotension and/or myocardial ischemia. Ketamine is best suited for patients with asthma and/or for children undergoing short, painful procedures.

Inhalational Anesthetics
Introduction
A wide variety of gases and volatile liquids can produce anesthesia. The first widely used inhalational anesthetic was diethyl ether (see Chapter 13: History and Principles of Anesthesiology). Subsequently, a variety of structurally unrelated compounds have been used as inhalational anesthetics, including cyclopropane, elemental xenon, nitrous oxide, and, more recently, short-chain halogenated alkanes and ethers. The structures of the currently used inhalational anesthetics are shown in Figure 14-4. One of the troublesome properties of the inhalational anesthetics is their low safety margin. The inhalational anesthetics have therapeutic indices (LD50/ED50) that range from 2 to 4, making these among the most dangerous drugs in clinical use. The toxicity of these drugs is largely a function of their side effects, and each of the inhalational anesthetics has a unique side-effect profile. Hence, the selection of an inhalational anesthetic often is based on matching a patient's pathophysiology with drug side-effect profiles. The specific adverse effects of each of the inhalational anesthetics are emphasized in the following sections. The inhalational anesthetics also vary widely in their physical properties. Table 14-1 lists the important physical properties of the inhalational agents in clinical use. These properties are important because they govern the pharmacokinetics of the inhalational agents. Ideally, an inhalational agent would produce a rapid induction of anesthesia and a rapid recovery following discontinuation. The pharmacokinetics of the inhalational agents is reviewed in the following section.

Figure 14-4. Structures of Inhalational General Anesthetics. Note that all inhalational general anesthetic agents except nitrous oxide and halothane are ethers and that fluorine progressively replaces other halogens in the development of the halogenated agents. All structural differences are associated with important differences in pharmacological properties.

Pharmacokinetic Principles
The inhalational agents are some of the very few pharmacological agents administered as gases. The fact that these agents behave as gases rather than as liquids requires that different pharmacokinetic constructs be used in analyzing their uptake and distribution. It is essential to understand that inhalational anesthetics distribute between tissues (or between blood and gas) such that equilibrium is achieved when the partial pressure of anesthetic gas is equal in the two tissues. When a person has breathed an inhalational anesthetic for a sufficiently long time that all tissues are equilibrated with the anesthetic, the partial pressure of the anesthetic in all tissues will be equal to the partial pressure of the anesthetic in inspired gas. It is important to note that while the partial pressure of the anesthetic may be equal in all tissues, the concentration of anesthetic in each tissue will be different. Indeed, anesthetic partition coefficients are defined as the ratio of anesthetic concentrations in two tissues when the partial pressures of anesthetic are equal in the two tissues. Blood:gas, brain:blood, and blood:fat partition coefficients for the various inhalational agents are listed in Table 14-1. These partition coefficients show that inhalational anesthetics are more soluble in some tissues (e.g., fat) than they are in others (e.g., blood), and that there is a significant range in the solubility of the various inhalational agents in such tissues. In clinical practice, one can monitor the equilibration of a patient with anesthetic gas. Equilibrium is achieved when the partial pressure in inspired gas is equal to the partial pressure in end-tidal (alveolar) gas. This defines equilibrium because it is the point at which there is no net uptake of anesthetic from the alveoli into the blood.
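The definition of the partition coefficient above implies a simple rule: at equilibrium, partial pressures match across phases but concentrations scale with the coefficient. A minimal numerical sketch follows; the blood:gas coefficient values used here are representative round numbers assumed for illustration, not figures taken from Table 14-1.

```python
# At equal partial pressures, concentration in blood equals the blood:gas
# partition coefficient (lambda) times the concentration in the gas phase.
# Coefficient values below are illustrative assumptions, not chapter data.

def blood_concentration(alveolar_conc_mM, blood_gas_lambda):
    """Blood concentration at equilibrium with a given alveolar (gas-phase)
    concentration: equal partial pressures imply C_blood = lambda * C_gas."""
    return blood_gas_lambda * alveolar_conc_mM

agents = {"halothane": 2.3, "isoflurane": 1.4, "desflurane": 0.42}
for name, lam in agents.items():
    c = blood_concentration(1.0, lam)  # 1.0 mM in alveolar gas, arbitrary
    print(f"{name}: blood concentration = {c:.2f} mM per 1.00 mM in gas")
```

The point of the sketch is that a poorly soluble agent (small coefficient) loads very little drug into blood at equilibrium, so the blood "fills up" quickly, which is why equilibration is fast for such agents.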
For inhalational agents that are not very soluble in blood or any other tissue, equilibrium is achieved quickly, as illustrated for nitrous oxide in Figure 14-5. If an agent is more soluble in a tissue such as fat, equilibrium may take many hours to reach. This occurs because fat represents a huge reservoir for the anesthetic, which will be filled slowly because of the modest blood flow to fat. This is illustrated by the slow approach of halothane alveolar partial pressure to inspired partial pressure in Figure 14-5.
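The contrast drawn above between nitrous oxide and halothane can be sketched with a deliberately simplified single-compartment alveolar model, in which ventilation delivers anesthetic and blood uptake (proportional to the blood:gas coefficient and cardiac output) removes it. Venous recirculation and tissue compartments are omitted, so the model captures only the early rise of the alveolar/inspired ratio; the ventilation, cardiac-output, lung-volume, and coefficient values are round numbers assumed for illustration, not data from this chapter.

```python
# Toy single-compartment wash-in model: dFA/dt is set by ventilation bringing
# anesthetic in (at inspired fraction FI = 1) and blood uptake carrying it
# away in proportion to lambda (blood:gas coefficient) and cardiac output.
# Venous return of anesthetic is ignored, which is reasonable only early in
# induction. All parameter values are illustrative assumptions.

def fa_over_fi(blood_gas_lambda, minutes, va=4.0, q=5.0, v_lung=2.5, dt=0.01):
    """Euler integration of dFA/dt = (VA*(1 - FA) - lambda*Q*FA) / V_lung.
    va = alveolar ventilation (L/min), q = cardiac output (L/min),
    v_lung = lung gas volume (L). Returns FA/FI at the given time."""
    fa, t = 0.0, 0.0
    while t < minutes:
        dfa = (va * (1.0 - fa) - blood_gas_lambda * q * fa) / v_lung
        fa += dfa * dt
        t += dt
    return fa

for name, lam in [("nitrous oxide", 0.47), ("halothane", 2.3)]:
    print(f"{name}: FA/FI after 10 min = {fa_over_fi(lam, 10):.2f}")
```

Even in this crude model, the poorly soluble agent's alveolar fraction rises much closer to the inspired fraction than the soluble agent's does, because less of each breath's anesthetic is carried away by the blood.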

Figure 14-5. Uptake of Inhalational General Anesthetics. The rise in alveolar (FA) anesthetic concentration toward the inspired (FI) concentration is most rapid with the least soluble anesthetics, nitrous oxide and desflurane, and slowest with the most soluble anesthetic, halothane. All data are from human studies. (Reproduced with permission from Eger, 2000.)

In considering the pharmacokinetics of anesthetics, one important parameter is the speed of anesthetic induction. Anesthetic induction requires that brain partial pressure be equal to MAC. Because the brain is well perfused, anesthetic partial pressure in brain becomes equal to the partial pressure in alveolar gas (and in blood) over the course of several minutes. Therefore, anesthesia is achieved shortly after alveolar partial pressure reaches MAC. While the rate of rise of alveolar partial pressure will be slower for anesthetics that are highly soluble in blood and other tissues, this limitation on speed of induction can be overcome largely by delivering higher inspired partial pressures of the anesthetic. Elimination of inhalational anesthetics is largely the reverse process of uptake. For agents with low blood and tissue solubility, recovery from anesthesia should mirror anesthetic induction, regardless of the duration of anesthetic administration. For inhalational agents with high blood and tissue solubility, recovery will be a function of the duration of anesthetic administration. This occurs because the accumulated amounts of anesthetic in the fat reservoir will prevent blood (and therefore alveolar) partial pressures from falling rapidly. Patients will be arousable when alveolar partial pressure reaches MAC-awake, a partial pressure somewhat lower than MAC (see Table 14-1).

Halothane
Chemistry and Formulation
Halothane (FLUOTHANE) is 2-bromo-2-chloro-1,1,1-trifluoroethane (see Figure 14-4). Halothane is a volatile liquid at room temperature and must be stored in a sealed container. Because halothane is a light-sensitive compound that also is subject to spontaneous breakdown, it is marketed in amber bottles with thymol added as a preservative. Mixtures of halothane with oxygen or air are neither flammable nor explosive.

Pharmacokinetics
Halothane has a relatively high blood:gas partition coefficient and a high blood:fat partition coefficient (see Table 14-1). Induction with halothane therefore is relatively slow, and the alveolar halothane concentration remains substantially lower than the inspired halothane concentration for many hours of administration. Because halothane is soluble in fat and other body tissues, it will accumulate during prolonged administration. Therefore, the speed of recovery from halothane is lengthened as a function of duration of administration (Stoelting and Eger, 1969). Approximately 60% to 80% of halothane taken up by the body is eliminated unchanged via the lungs in the first 24 hours after its administration. A substantial amount of the halothane not eliminated in exhaled gas is biotransformed in the liver by cytochrome P450 enzymes. The major metabolite of halothane is trifluoroacetic acid, which is formed by removal of bromide and chloride ions (Gruenke et al., 1988). Trifluoroacetic acid, bromide, and chloride all can be detected in the urine. Trifluoroacetyl chloride, an intermediate in the oxidative metabolism of halothane, can covalently trifluoroacetylate several proteins in the liver. An immune reaction to these altered proteins may be responsible for the rare cases of fulminant halothane-induced hepatic necrosis (Kenna et al., 1988). There also is a minor reductive pathway accounting for approximately 1% of halothane metabolism, generally observed only under hypoxic conditions (Van Dyke et al., 1988).

Clinical Use
Halothane, introduced in 1956, was the first of the modern, halogenated inhalational anesthetics used in clinical practice. It is a potent agent that usually is used for maintenance of anesthesia. It is not pungent and is therefore well tolerated for inhalation induction of anesthesia. This is most commonly done in children, in whom preoperative placement of an intravenous catheter can be difficult.
Anesthesia is produced by halothane at end-tidal concentrations of 0.7% to 1.0%. The end-tidal concentration of halothane required to produce anesthesia is substantially reduced when it is coadministered with nitrous oxide. The use of halothane in the United States has diminished substantially in the past decade because of the introduction of newer inhalational agents with better pharmacokinetic and side-effect profiles. Halothane continues to be used extensively in children because it is well tolerated for inhalation induction and because its serious side effects appear to be diminished in children. Halothane has a low cost and is therefore still widely used in developing countries.

Side Effects
Cardiovascular System
The most predictable side effect of halothane is a dose-dependent reduction in arterial blood pressure. Mean arterial pressure decreases about 20% to 25% at MAC concentrations of halothane. This reduction in blood pressure primarily is the result of direct myocardial depression leading to reduced cardiac output (see Figure 14-6). Myocardial depression is thought to result from attenuation of depolarization-induced intracellular calcium transients (Lynch, 1997). Halothane-induced hypotension usually is accompanied by either bradycardia or a normal heart rate. This absence of a tachycardic (or contractile) response to reduced blood pressure is thought to be due to an inability of the heart to respond to the effector arm of the baroreceptor reflex. Heart rate can be increased during halothane anesthesia by exogenous catecholamines or by sympathoadrenal stimulation. Halothane-induced reductions in blood pressure and heart rate generally disappear after several hours of constant halothane administration. This is thought to occur because of progressive sympathetic stimulation (Eger et al., 1970).

Figure 14-6. Influence of Inhalational General Anesthetics on the Systemic Circulation. While all of the inhalational anesthetics reduce systemic blood pressure in a dose-related manner (top), the lower figure shows that cardiac output is well preserved with isoflurane and desflurane and, therefore, that the causes of hypotension vary with the agent. (Data are from human studies except for sevoflurane, where data are from swine: Bahlman et al., 1972; Cromwell et al., 1971; Weiskopf et al., 1991; Calverley et al., 1978; Stevens et al., 1971; Eger et al., 1970; Weiskopf et al., 1988.)

Halothane does not cause a significant change in systemic vascular resistance. Nonetheless, it causes changes in the resistance and autoregulation of specific vascular beds, leading to redistribution of blood flow. The vascular beds of the skin and brain are dilated directly by halothane, leading to increased cerebral blood flow and skin perfusion. Conversely, autoregulation of renal, splanchnic, and cerebral blood flow is inhibited by halothane, leading to reduced perfusion of these organs in the face of reduced blood pressure. Coronary autoregulation is largely preserved during halothane anesthesia. Finally, halothane does inhibit hypoxic pulmonary vasoconstriction, increasing perfusion of poorly ventilated regions of the lung and thereby widening the alveolar:arterial oxygen gradient.

Halothane also has significant effects on cardiac rhythm. Sinus bradycardia and atrioventricular rhythms occur frequently during halothane anesthesia but are usually benign. These rhythms result mainly from a direct depressive effect of halothane on sinoatrial node discharge. Halothane also can sensitize the myocardium to the arrhythmogenic effects of epinephrine (Sumikawa et al., 1983). Premature ventricular contractions and sustained ventricular tachycardia can be observed during halothane anesthesia when exogenous administration or endogenous adrenal production elevates plasma epinephrine levels. Epinephrine-induced arrhythmias during halothane anesthesia are thought to be mediated by a synergistic effect on α1- and β1-adrenergic receptors (Hayashi et al., 1988).

Respiratory System
Spontaneous respiration is rapid and shallow during halothane anesthesia. This produces a decrease in alveolar ventilation, resulting in an elevation of arterial carbon dioxide tension from 40 mm Hg to >50 mm Hg at 1 MAC (see Figure 14-7). The elevated carbon dioxide does not provoke a compensatory increase in ventilation, because halothane causes a concentration-dependent inhibition of the ventilatory response to carbon dioxide (Knill and Gelb, 1978). This action of halothane is thought to be mediated by depression of central chemoreceptor mechanisms. Halothane also inhibits peripheral chemoreceptor responses to arterial hypoxemia. Thus, neither hemodynamic (tachycardia, hypertension) nor ventilatory responses to hypoxemia are observed during halothane anesthesia, making it prudent to monitor arterial oxygenation directly. Halothane also is an effective bronchodilator, producing direct relaxation of bronchial smooth muscle (Yamakage, 1992), and has been used effectively as a treatment of last resort in patients with status asthmaticus (Gold and Helrich, 1970).

Figure 14-7. Respiratory Effects of Inhalational Anesthetics.
Spontaneous ventilation with all of the halogenated inhalational anesthetics reduces minute volume of ventilation in a dose-dependent manner (lower panel). This results in an increased arterial carbon dioxide tension (top panel). Differences among agents are modest. (Data are from Doi and Ikeda, 1987; Lockhart et al., 1991; Munson et al., 1966; Calverley et al., 1978; Fourcade et al., 1971.)

Nervous System
Halothane dilates the cerebral vasculature, increasing cerebral blood flow under most conditions. This increase in blood flow can increase intracranial pressure in patients with space-occupying intracranial masses, brain edema, or preexisting intracranial hypertension. For this reason, halothane is relatively contraindicated in patients at risk for elevated intracranial pressure. Halothane also attenuates autoregulation of cerebral blood flow. For this reason, cerebral blood flow can decrease when arterial blood pressure is markedly decreased. Modest decreases in cerebral blood flow generally are well tolerated, because halothane also reduces cerebral metabolic consumption of oxygen.

Muscle
Halothane causes some relaxation of skeletal muscle via its central depressant effects. Halothane also potentiates the actions of nondepolarizing muscle relaxants (curariform drugs; see Chapter 9: Agents Acting at the Neuromuscular Junction and Autonomic Ganglia), increasing both their duration of action and the magnitude of their effect. Halothane also is one of the triggering agents for malignant hyperthermia, a syndrome characterized by severe muscle contraction, rapid development of hyperthermia, and a massive increase in metabolic rate in genetically susceptible patients. This syndrome frequently is fatal and is treated by immediate discontinuation of the anesthetic and administration of dantrolene.

Uterine smooth muscle is relaxed by halothane. This is a useful property for manipulation of the fetus (version) in the prenatal period and for delivery of retained placenta postnatally. Halothane, however, does inhibit uterine contractions during parturition, prolonging labor and increasing blood loss. Halothane therefore is not used as an analgesic or anesthetic for labor and vaginal delivery.

Kidney
Patients anesthetized with halothane usually produce a small volume of concentrated urine. This is the consequence of halothane-induced reduction of renal blood flow and glomerular filtration rate; these parameters may be reduced by 40% to 50% at 1 MAC (Mazze et al., 1963). Halothane-induced changes in renal function are fully reversible and are not associated with long-term nephrotoxicity.

Liver and Gastrointestinal Tract
Halothane reduces splanchnic and hepatic blood flow as a consequence of reduced perfusion pressure, as discussed above. This reduced blood flow has not been shown to produce detrimental effects on hepatic or gastrointestinal function. Halothane can produce fulminant hepatic necrosis in a small number of patients. This syndrome generally is characterized by fever, anorexia, nausea, and vomiting developing several days after anesthesia and can be accompanied by a rash and peripheral eosinophilia. There is a rapid progression to hepatic failure, with a fatality rate of approximately 50%. This syndrome occurs in about 1 in 10,000 patients receiving halothane and is referred to as halothane hepatitis (Subcommittee on the National Halothane Study, 1966). Current thinking is that halothane hepatitis is the result of an immune response to trifluoroacetylated proteins on hepatocytes (see "Pharmacokinetics," above).

Isoflurane
Chemistry and Physical Properties
Isoflurane (FORANE) is 1-chloro-2,2,2-trifluoroethyl difluoromethyl ether (see Figure 14-4).
It is a volatile liquid at room temperature and is neither flammable nor explosive in mixtures with air or oxygen.

Pharmacokinetics
Isoflurane has a blood:gas partition coefficient substantially lower than that of halothane or enflurane (see Table 14-1). Consequently, induction with isoflurane and recovery from isoflurane are relatively rapid. Changes in anesthetic depth also can be achieved more rapidly with isoflurane than with halothane or enflurane. More than 99% of inhaled isoflurane is excreted unchanged via the lungs. Approximately 0.2% of absorbed isoflurane is oxidatively metabolized by cytochrome P450 2E1 (Kharasch et al., 1993). The small amounts of isoflurane degradation products produced are insufficient to produce any renal, hepatic, or other organ toxicity. Isoflurane does not appear to be a mutagen, teratogen, or carcinogen (Eger et al., 1978).

Clinical Use
Isoflurane is the most commonly used inhalational anesthetic in the United States. Induction of anesthesia can be achieved in less than 10 minutes with an inhaled concentration of 3% isoflurane in oxygen; this concentration is reduced to 1.5% to 2.5% for maintenance of anesthesia. The use of other drugs such as opioids or nitrous oxide reduces the concentration of isoflurane required for surgical anesthesia.

Side Effects
Cardiovascular System
Isoflurane produces a concentration-dependent decrease in arterial blood pressure. Unlike with halothane, cardiac output is well maintained with isoflurane, and hypotension is the result of decreased systemic vascular resistance (see Figure 14-6). Isoflurane produces vasodilation in most vascular beds, with particularly pronounced effects in skin and muscle. Isoflurane is a potent coronary vasodilator, simultaneously producing increased coronary blood flow and decreased myocardial oxygen consumption. In theory, this makes isoflurane a particularly safe anesthetic to use for patients with ischemic heart disease. However, concern has been raised that isoflurane may produce myocardial ischemia by inducing "coronary steal" (i.e., the diversion of blood flow from poorly perfused to well-perfused areas) (Buffington et al., 1988). This concern has not been substantiated in subsequent animal and human studies. Patients anesthetized with isoflurane generally have mildly elevated heart rates, and rapid changes in isoflurane concentration can produce transient tachycardia and hypertension. This is the result of direct isoflurane-induced sympathetic stimulation.

Respiratory System
Isoflurane produces concentration-dependent depression of ventilation. Patients spontaneously breathing isoflurane have a normal rate of respiration but a reduced tidal volume, resulting in a marked reduction in alveolar ventilation and an increase in arterial carbon dioxide tension (see Figure 14-7). Isoflurane is particularly effective at depressing the ventilatory responses to hypercapnia and hypoxia (Hirshman et al., 1977).
While isoflurane is an effective bronchodilator, it also is an airway irritant and can stimulate airway reflexes during induction of anesthesia, producing coughing and laryngospasm.

Nervous System
Isoflurane, like halothane, dilates the cerebral vasculature, producing increased cerebral blood flow and the risk of increased intracranial pressure. Isoflurane also reduces cerebral metabolic oxygen consumption. Isoflurane causes less cerebral vasodilation than does either enflurane or halothane, making it a preferred agent for neurosurgical procedures (Drummond et al., 1983). The modest effects of isoflurane on cerebral blood flow can be reversed readily by hyperventilation (McPherson et al., 1989).

Muscle
Isoflurane produces some relaxation of skeletal muscle via its central effects. It also enhances the effects of both depolarizing and nondepolarizing muscle relaxants. Isoflurane is more potent than halothane in its potentiation of neuromuscular blocking agents. Isoflurane, like other halogenated inhalational anesthetics, relaxes uterine smooth muscle and is not recommended for analgesia or anesthesia for labor and vaginal delivery.

Kidney

Isoflurane reduces renal blood flow and glomerular filtration rate, resulting in a small volume of concentrated urine. Changes in renal function observed during isoflurane anesthesia are rapidly reversed, and there are no long-term renal sequelae or toxicity associated with isoflurane.

Liver and Gastrointestinal Tract
Splanchnic (and hepatic) blood flow is reduced with increasing doses of isoflurane, as systemic arterial pressure decreases. Liver function tests are minimally affected by isoflurane, and there is no described incidence of hepatic toxicity with isoflurane.

Enflurane
Chemical and Physical Properties
Enflurane (ETHRANE) is 2-chloro-1,1,2-trifluoroethyl difluoromethyl ether (see Figure 14-4). It is a clear, colorless liquid at room temperature with a mild, sweet odor. Like other inhalational anesthetics, it is volatile and must be stored in a sealed bottle. It is nonflammable and nonexplosive in mixtures with air or oxygen.

Pharmacokinetics
Because of its relatively high blood:gas partition coefficient, induction of anesthesia with and recovery from enflurane are relatively slow (see Table 14-1). Enflurane is metabolized to a modest extent, with 2% to 8% of absorbed enflurane undergoing oxidative metabolism in the liver by cytochrome P450 2E1 (Kharasch et al., 1994). Fluoride ions are a by-product of enflurane metabolism, but plasma fluoride levels are low and nontoxic. Patients taking isoniazid exhibit enhanced metabolism of enflurane with significantly elevated serum fluoride concentrations (Mazze et al., 1982).

Clinical Use
Surgical anesthesia can be induced with enflurane in less than 10 minutes with an inhaled concentration of 4% in oxygen. Anesthesia can be maintained with concentrations from 1.5% to 3%. As with other anesthetics, the enflurane concentrations required to produce anesthesia are reduced when it is coadministered with nitrous oxide or opioids.
Use of enflurane has decreased substantially in recent years with the introduction of newer inhalational agents with preferable pharmacokinetic and side-effect profiles.

Side Effects
Cardiovascular System
Enflurane causes a concentration-dependent decrease in arterial blood pressure. Hypotension is due, in part, to depression of myocardial contractility, with some contribution from peripheral vasodilation (see Figure 14-6). Enflurane has minimal effects on heart rate and produces neither the bradycardia seen with halothane nor the tachycardia seen with isoflurane.

Respiratory System
The respiratory effects of enflurane are similar to those of halothane. Spontaneous ventilation with enflurane produces a pattern of rapid, shallow breathing. Minute ventilation is markedly decreased, and a PaCO2 of 60 mm Hg is seen with 1 MAC of enflurane (see Figure 14-7). Enflurane produces a greater depression of the ventilatory responses to hypoxia and hypercarbia than does either halothane or isoflurane (Hirshman et al., 1977). Enflurane, like other inhalational anesthetics, is an effective bronchodilator.

Nervous System
Enflurane is a cerebral vasodilator and thus can increase intracranial pressure in some patients. Like other inhalational anesthetics, enflurane reduces cerebral metabolic oxygen consumption. Enflurane has the unusual property of producing electrical seizure activity. High concentrations of enflurane or profound hypocarbia during enflurane anesthesia result in a characteristic high-voltage, high-frequency electroencephalographic (EEG) pattern, which progresses to spike-and-dome complexes. The spike-and-dome pattern can be punctuated by frank seizure activity, which may or may not be accompanied by peripheral motor manifestations of seizure activity. The seizures are self-limited and are not thought to produce permanent damage. Enflurane is not thought to precipitate seizures in epileptic patients. Nonetheless, enflurane generally is not used in patients with seizure disorders.

Muscle
Enflurane produces significant skeletal muscle relaxation in the absence of muscle relaxants. It also significantly enhances the effects of nondepolarizing muscle relaxants. As with other inhalational agents, enflurane relaxes uterine smooth muscle. It thus is not widely used for obstetrical anesthesia.

Kidney
Like other inhalational anesthetics, enflurane reduces renal blood flow, glomerular filtration rate, and urinary output. These effects are rapidly reversed with discontinuation of the drug. Enflurane metabolism produces significant plasma levels of fluoride ions (20 to 40 μM) and can produce transient urinary-concentrating defects following prolonged administration (Mazze et al., 1977).
There is scant evidence of long-term nephrotoxicity following enflurane use, and it is safe to use in patients with renal impairment, provided that the depth of enflurane anesthesia and the duration of administration are not excessive.

Liver and Gastrointestinal Tract
Enflurane reduces splanchnic and hepatic blood flow in proportion to reduced arterial blood pressure. Enflurane does not appear to alter liver function or to be hepatotoxic.

Desflurane
Chemistry and Physical Properties
Desflurane (SUPRANE) is difluoromethyl 1-fluoro-2,2,2-trifluoroethyl ether (see Figure 14-4). It is a highly volatile liquid at room temperature (vapor pressure = 681 mm Hg) and thus must be stored in tightly sealed bottles. Delivery of a precise concentration of desflurane requires the use of a specially heated vaporizer that delivers pure vapor, which is then diluted appropriately with other gases (oxygen, air, or nitrous oxide). Desflurane is nonflammable and nonexplosive in mixtures with air or oxygen.

Pharmacokinetics

Desflurane has a very low blood:gas partition coefficient (0.42) and also is not very soluble in fat or other peripheral tissues (see Table 14-1). For this reason, the alveolar (and blood) concentration rises rapidly to the level of the inspired concentration; indeed, within five minutes of administration, the alveolar concentration reaches 80% of the inspired concentration. This provides for a very rapid induction of anesthesia and for rapid changes in the depth of anesthesia following changes in the inspired concentration. Emergence from anesthesia also is very rapid with desflurane. The time to awakening following desflurane is half as long as with halothane or sevoflurane and usually does not exceed 5 to 10 minutes (Smiley et al., 1991). Desflurane is metabolized to a minimal extent, and more than 99% of absorbed desflurane is eliminated unchanged via the lungs. A small amount of absorbed desflurane is oxidatively metabolized by hepatic cytochrome P450 enzymes. Virtually no fluoride ions are detectable in serum after desflurane administration, but low concentrations of trifluoroacetic acid are detectable in serum and urine (Koblin et al., 1988).

Clinical Use
Desflurane is a widely used anesthetic for outpatient surgery because of its rapid onset of action and rapid recovery. Desflurane is irritating to the airway in awake patients and can provoke coughing, salivation, and bronchospasm. Anesthesia therefore usually is induced with an intravenous agent, with desflurane subsequently administered for maintenance of anesthesia. Maintenance of anesthesia usually requires inhaled concentrations of 6% to 8%. Lower concentrations of desflurane are required if it is coadministered with nitrous oxide or opioids.

Side Effects
Cardiovascular System
Desflurane, like all inhalational anesthetics, causes a concentration-dependent decrease in blood pressure.
Desflurane has a very modest negative inotropic effect and produces hypotension primarily by decreasing systemic vascular resistance (Eger, 1994) (see Figure 14-6). Cardiac output thus is well preserved during desflurane anesthesia, as is blood flow to the major organ beds (splanchnic, renal, cerebral, coronary). Marked increases in heart rate often are noted during induction of desflurane anesthesia and during abrupt increases in the delivered concentration of desflurane. This tachycardia is transient and is the result of desflurane-induced stimulation of the sympathetic nervous system (Ebert and Muzi, 1993). While the hypotensive effects of some inhalational anesthetics are attenuated as a function of duration of administration, this is not the case with desflurane (Weiskopf et al., 1991).

Respiratory System
Similar to halothane and enflurane, desflurane causes a concentration-dependent increase in respiratory rate and a decrease in tidal volume. At low concentrations (less than 1 MAC), the net effect is to preserve minute ventilation. At desflurane concentrations greater than 1 MAC, minute ventilation is markedly depressed, resulting in elevated arterial carbon dioxide tension (see Figure 14-7) (Lockhart et al., 1991). Patients spontaneously breathing desflurane at concentrations greater than 1.5 MAC will have extreme elevations of arterial carbon dioxide tension and may become apneic. Desflurane, like other inhalational agents, is a bronchodilator. It also is a strong airway irritant, however, and can cause coughing, breath-holding, laryngospasm, and excessive respiratory secretions. Because of its irritant properties, desflurane is not used for induction of anesthesia.

Nervous System
Desflurane decreases cerebral vascular resistance and cerebral metabolic oxygen consumption. Under conditions of normocapnia and normotension, desflurane produces an increase in cerebral blood flow and can increase intracranial pressure in patients with poor intracranial compliance. The vasoconstrictive response to hypocapnia is preserved during desflurane anesthesia, and increases in intracranial pressure thus can be prevented by hyperventilation.

Muscle
Desflurane produces direct skeletal muscle relaxation as well as enhancing the effects of nondepolarizing and depolarizing neuromuscular blocking agents (Caldwell et al., 1991).

Kidney
Desflurane has no reported nephrotoxicity, consistent with its minimal metabolic degradation.

Liver and Gastrointestinal Tract
Desflurane is not known to affect liver function tests or to cause hepatotoxicity.

Sevoflurane
Chemistry and Physical Properties
Sevoflurane (ULTANE) is fluoromethyl 2,2,2-trifluoro-1-[trifluoromethyl]ethyl ether (see Figure 14-4). It is a clear, colorless, volatile liquid at room temperature and must be stored in a sealed bottle. It is nonflammable and nonexplosive in mixtures with air or oxygen.

Pharmacokinetics
The low solubility of sevoflurane in blood and other tissues provides for rapid induction of anesthesia, rapid changes in anesthetic depth following changes in delivered concentration, and rapid emergence following discontinuation of administration (see Table 14-1). Approximately 3% of absorbed sevoflurane is biotransformed. Sevoflurane is metabolized in the liver by cytochrome P450 2E1, with the predominant product being hexafluoroisopropanol (Kharasch et al., 1995). Hepatic metabolism of sevoflurane also produces inorganic fluoride. Serum fluoride concentrations reach a peak shortly after surgery and decline rapidly. Interaction of sevoflurane with soda lime also produces decomposition products.
The major product of interest is referred to as compound A and is pentafluoroisopropenyl fluoromethyl ether (see "Side Effects: Kidney," below) (Hanaki et al., 1987).

Clinical Use
Sevoflurane has been widely used in Japan for a number of years and is enjoying increasing use in the United States. Sevoflurane is widely used for outpatient anesthesia because of its rapid recovery profile. It also is a useful drug for inhalation induction of anesthesia (particularly in children), because it is not irritating to the airway. Induction of anesthesia is rapidly achieved using inhaled concentrations of 2% to 4% sevoflurane.

Side Effects

Cardiovascular System
Sevoflurane, like all other halogenated inhalational anesthetics, produces a concentration-dependent decrease in arterial blood pressure. This hypotensive effect primarily is due to systemic vasodilation, although sevoflurane also produces a concentration-dependent decrease in cardiac output (see Figure 14–6). Unlike isoflurane or desflurane, sevoflurane does not produce tachycardia and thus may be a preferable agent in patients prone to myocardial ischemia.

Respiratory System
Sevoflurane produces a concentration-dependent reduction in tidal volume and increase in respiratory rate in spontaneously breathing patients. The increased respiratory frequency is not adequate to compensate for reduced tidal volume, with the net effect being a reduction in minute ventilation and an increase in arterial carbon dioxide tension (Doi and Ikeda, 1987) (see Figure 14–7). Sevoflurane is not irritating to the airway and is a potent bronchodilator. Because of this combination of properties, sevoflurane is the most effective clinical bronchodilator of the inhalational anesthetics (Rooke et al., 1997).

Nervous System
Sevoflurane produces effects on cerebral vascular resistance, cerebral metabolic oxygen consumption, and cerebral blood flow that are very similar to those produced by isoflurane and desflurane. While sevoflurane thus can increase intracranial pressure in patients with poor intracranial compliance, the response to hypocapnia is preserved during sevoflurane anesthesia, and increases in intracranial pressure thus can be prevented by hyperventilation.

Muscle
Sevoflurane produces direct skeletal muscle relaxation as well as enhancing the effects of nondepolarizing and depolarizing neuromuscular blocking agents. Its effects are similar to those of other halogenated inhalational anesthetics.
Kidney
Controversy has surrounded the potential nephrotoxicity of compound A, the degradation product produced by interaction of sevoflurane with the carbon dioxide absorbent soda lime. There has been a report showing transient biochemical evidence of renal injury in studies with human volunteers but no evidence of permanent renal injury (Eger et al., 1997). Large clinical studies have shown no evidence of increased serum creatinine, blood urea nitrogen, or any other evidence of renal impairment following sevoflurane administration (Mazze et al., 2000). The current recommendation of the U.S. Food and Drug Administration is that sevoflurane be administered with fresh gas flows of at least 2 liters/minute to minimize accumulation of compound A.

Liver and Gastrointestinal Tract
Sevoflurane is not known to cause hepatotoxicity or alterations of hepatic function tests.

Nitrous Oxide

Chemical and Physical Properties
Nitrous oxide (dinitrogen monoxide; N2O) is a colorless, odorless gas at room temperature (see Figure 14–4). It is sold in steel cylinders and must be delivered through calibrated flow meters provided on all anesthesia machines. Nitrous oxide is neither flammable nor explosive, but it does support combustion as actively as oxygen does when it is present in proper concentration with a flammable anesthetic or material.

Pharmacokinetics
Nitrous oxide is very insoluble in blood and other tissues (see Table 14–1). This results in rapid equilibration between delivered and alveolar anesthetic concentrations and provides for rapid induction of anesthesia and rapid emergence following discontinuation of administration. The rapid uptake of nitrous oxide from alveolar gas serves to concentrate coadministered halogenated anesthetics; this effect (the "second gas effect") speeds induction of anesthesia. On discontinuation of nitrous oxide administration, nitrous oxide gas can diffuse from blood to the alveoli, diluting oxygen in the lung. This can produce an effect called diffusional hypoxia. To avoid hypoxia, 100% oxygen rather than air should be administered when nitrous oxide is discontinued. Nitrous oxide is almost completely eliminated by the lungs, with some minimal diffusion through the skin. Nitrous oxide is not biotransformed by enzymatic action in human tissue, and 99.9% of absorbed nitrous oxide is eliminated unchanged. Nitrous oxide can be degraded by interaction with vitamin B12 in intestinal bacteria. This results in inactivation of methionine synthase and can produce signs of vitamin B12 deficiency (megaloblastic anemia, peripheral neuropathy) following long-term nitrous oxide administration (O'Sullivan et al., 1981). For this reason, nitrous oxide is not used as a chronic analgesic or as a sedative in critical care settings.
Clinical Use
Nitrous oxide is a weak anesthetic agent and produces reliable surgical anesthesia only under hyperbaric conditions. It does produce significant analgesia at concentrations as low as 20% and usually produces sedation in concentrations between 30% and 80%. It is used frequently in concentrations of approximately 50% to provide analgesia and sedation in outpatient dentistry. Nitrous oxide cannot be used at concentrations above 80%, because this limits the delivery of an adequate amount of oxygen. Because of this limitation, nitrous oxide is used primarily as an adjunct to other inhalational or intravenous anesthetics. Nitrous oxide substantially reduces the requirement for inhalational anesthetics. For example, at 70% nitrous oxide, MAC for other inhalational agents is reduced by about 60%, allowing for lower concentrations of halogenated anesthetics and a lesser degree of side effects. One major problem with nitrous oxide is that it will exchange with nitrogen in any air-containing cavity in the body. Moreover, nitrous oxide will enter the cavity faster than nitrogen escapes, thereby increasing the volume and/or pressure in this cavity. Examples of air collections that can be expanded by nitrous oxide include a pneumothorax, an obstructed middle ear, an air embolus, an obstructed loop of bowel, an intraocular air bubble, a pulmonary bulla, and intracranial air. Nitrous oxide should be avoided in these clinical settings.

Side Effects

Cardiovascular System

Although nitrous oxide produces a negative inotropic effect on heart muscle in vitro, depressant effects on cardiac function generally are not observed in patients. This is because of the stimulatory effects of nitrous oxide on the sympathetic nervous system. The cardiovascular effects of nitrous oxide also are heavily influenced by the concomitant administration of other anesthetic agents. When nitrous oxide is coadministered with halogenated inhalational anesthetics, it generally produces an increase in heart rate, arterial blood pressure, and cardiac output. In contrast, when nitrous oxide is coadministered with an opioid, it generally decreases arterial blood pressure and cardiac output. Nitrous oxide also increases venous tone in both the peripheral and pulmonary vasculature. The effects of nitrous oxide on pulmonary vascular resistance can be exaggerated in patients with preexisting pulmonary hypertension (Schulte-Sasse et al., 1982). Nitrous oxide, therefore, is not generally used in patients with pulmonary hypertension.

Respiratory System
Nitrous oxide causes modest increases in respiratory rate and decreases in tidal volume in spontaneously breathing patients. The net effect is that minute ventilation is not significantly changed and arterial carbon dioxide tension remains normal. However, even modest concentrations of nitrous oxide markedly depress the ventilatory response to hypoxia (Yacoub et al., 1975). Thus it is prudent to monitor arterial oxygen saturation directly in patients receiving or recovering from nitrous oxide.

Nervous System
When nitrous oxide is administered alone, it can produce significant increases in cerebral blood flow and intracranial pressure. When nitrous oxide is coadministered with intravenous anesthetic agents, increases in cerebral blood flow are attenuated or abolished. When nitrous oxide is added to a halogenated inhalational anesthetic, its vasodilatory effect on the cerebral vasculature is slightly reduced.
Muscle
Nitrous oxide does not relax skeletal muscle and does not enhance the effects of neuromuscular blocking drugs. Unlike the halogenated anesthetics, nitrous oxide is not a triggering agent for malignant hyperthermia.

Kidney, Liver, and Gastrointestinal Tract
Nitrous oxide is not known to produce any changes in renal or hepatic function and is neither nephrotoxic nor hepatotoxic.

Xenon
Xenon is an inert gas that was first identified as an anesthetic agent in 1951 (Cullen and Gross, 1951). It is not approved for use in the United States and is unlikely to enjoy widespread use, because it is a rare gas that cannot be manufactured and must be extracted from air. This limits the quantities of available xenon gas and renders xenon a very expensive agent. Despite these shortcomings, xenon has properties that make it a virtually ideal anesthetic gas that ultimately may be used in critical situations (Lynch et al., 2000). Xenon is extremely insoluble in blood and other tissues, providing for rapid induction and emergence from anesthesia (see Table 14–1). It is sufficiently potent to produce surgical anesthesia when administered with 30% oxygen. Most importantly, xenon has minimal side effects. It has no effects on cardiac output or cardiac rhythm and is not thought to have a significant effect on systemic vascular resistance. It also does not affect pulmonary function and is not known to have any hepatic or renal toxicity. Finally, xenon is not metabolized at all in the human body. Xenon is an anesthetic that may be available in the future if limitations on its availability and its high cost can be overcome.
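The MAC arithmetic that recurs in this section (for example, 70% nitrous oxide reducing the MAC requirement for other inhalational agents by about 60%) can be sketched as simple fraction bookkeeping, assuming MAC fractions add linearly. In the minimal Python sketch below, the function name and the illustrative isoflurane MAC value are our own assumptions, not data from the text:

```python
def required_agent_concentration(agent_mac_pct, mac_fraction_spared):
    """Inhaled concentration (vol%) of a halogenated agent needed to reach
    a total of 1.0 MAC when an adjunct (e.g., nitrous oxide) already
    supplies the given MAC fraction.

    Assumes MAC fractions add linearly; agent_mac_pct is an illustrative
    value, not authoritative MAC data.
    """
    if not 0.0 <= mac_fraction_spared < 1.0:
        raise ValueError("spared MAC fraction must be in [0, 1)")
    return agent_mac_pct * (1.0 - mac_fraction_spared)

# With an assumed isoflurane MAC of ~1.2 vol%, 70% nitrous oxide
# (about 0.6 MAC, per the text) leaves ~0.48 vol% isoflurane to deliver.
print(required_agent_concentration(1.2, 0.6))
```

The same arithmetic applies to any MAC-sparing adjunct; only the spared fraction changes.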

Anesthetic Adjuncts
General anesthetics rarely are given as the sole agent. Rather, anesthetic adjuncts usually are used to augment specific components of anesthesia, permitting lower doses of general anesthetics with fewer side effects. Because they are such an integral part of general anesthetic drug regimens, why and how they are utilized as anesthetic adjuncts will be described briefly here. The detailed pharmacology of each drug is covered in other chapters.

Benzodiazepines
Benzodiazepines (see Chapter 17: Hypnotics and Sedatives) are commonly used for sedation rather than general anesthesia because of the prolonged amnesia and sedation that may result from anesthetizing doses. As adjuncts, benzodiazepines are used for anxiolysis, amnesia, and sedation prior to induction of anesthesia or for sedation during procedures not requiring general anesthesia. The benzodiazepine most frequently used in the perioperative period is midazolam (VERSED), followed distantly by diazepam (VALIUM) and lorazepam (ATIVAN). Midazolam is water soluble and is typically administered intravenously but also can be given orally, intramuscularly, or rectally; oral midazolam is particularly useful for sedation of young children. Midazolam produces minimal venous irritation, as opposed to diazepam and lorazepam, which are formulated in propylene glycol and are painful on injection, sometimes producing thrombophlebitis. Midazolam has the pharmacokinetic advantage, particularly over lorazepam, of being more rapid in onset and shorter in duration of effect. Sedative doses of midazolam (0.01 to 0.07 mg/kg intravenously) reach peak effect in about 2 minutes and provide sedation for about 30 minutes (Reves et al., 1985). Elderly patients tend to be more sensitive to and have a slower recovery from benzodiazepines (Jacobs et al., 1995); thus, titration of smaller doses in this age group is prudent.
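The weight-based sedative dosing just described is simple arithmetic. A hedged Python sketch follows; the function name is ours, and this merely illustrates the 0.01 to 0.07 mg/kg range cited above. It is an illustration of the arithmetic, not clinical guidance:

```python
def midazolam_sedative_dose_mg(weight_kg, dose_mg_per_kg):
    """Return a weight-based IV midazolam dose in mg.

    Illustrative only: enforces the 0.01-0.07 mg/kg sedative range cited
    in the text. Not a substitute for clinical judgment (smaller titrated
    doses are prudent in the elderly).
    """
    if not 0.01 <= dose_mg_per_kg <= 0.07:
        raise ValueError("dose outside the cited 0.01-0.07 mg/kg sedative range")
    return weight_kg * dose_mg_per_kg

# A 70-kg patient at a mid-range 0.05 mg/kg receives 3.5 mg.
print(midazolam_sedative_dose_mg(70, 0.05))
```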
Midazolam is hepatically metabolized, with a clearance (6 to 11 ml/min per kg) similar to that of methohexital and about 20 and 7 times higher than those of diazepam and lorazepam, respectively (Greenblatt et al., 1981; Reves et al., 1985). Either for prolonged sedation or for general anesthetic maintenance, midazolam is more suitable for infusion than are other benzodiazepines, although its duration of action does significantly increase with prolonged infusions (Figure 14–3). Benzodiazepines reduce both cerebral blood flow and metabolism but at equianesthetic doses are less potent in this respect than are barbiturates. They are effective anticonvulsants and are sometimes given to treat status epilepticus (Modica et al., 1990). Benzodiazepines modestly decrease blood pressure and respiratory drive, occasionally resulting in apnea (Reves et al., 1985). Thus, blood pressure and respiratory rate should be monitored in patients sedated with intravenous benzodiazepines.

Analgesics
With the exception of ketamine, neither parenteral nor currently available inhalational anesthetics are effective analgesics. Thus, analgesics typically are administered with general anesthetics to reduce anesthetic requirement and minimize hemodynamic changes produced by painful stimuli. Nonsteroidal antiinflammatory drugs, including cyclo-oxygenase-2 inhibitors, or acetaminophen sometimes provide adequate analgesia for minor surgical procedures. However, because of the rapid and profound analgesia produced, opioids are the primary analgesics used during the perioperative period. Fentanyl (SUBLIMAZE), sufentanil (SUFENTA), alfentanil (ALFENTA), remifentanil (ULTIVA), meperidine (DEMEROL), and morphine are the major parenteral opioids used in the perioperative period. The primary analgesic activity of each of these drugs is produced by agonist activity at μ opioid receptors (Pasternak, 1993). Their order of potency (relative to morphine) is: sufentanil (1000x) > remifentanil (300x) > fentanyl (100x) > alfentanil (15x) > morphine (1x) > meperidine (0.1x) (Clotz and Nahata, 1991; Glass et al., 1993; Martin, 1983). Pharmacological properties of these agents are discussed in more detail in Chapter 23: Opioid Analgesics. The choice of a perioperative opioid is based primarily on duration of action, given that, at appropriate doses, all produce similar analgesia and side effects. Remifentanil has an ultra-short duration of action (~10 minutes) and minimally accumulates with repeated doses or infusion (Glass et al., 1993); it is particularly well suited for procedures that are briefly painful but for which little analgesia is required postoperatively. Single doses of fentanyl, alfentanil, and sufentanil all have similar intermediate durations of action (30, 20, and 15 minutes, respectively), but as for general anesthetics, recovery after prolonged administration varies considerably (Shafer and Varvel, 1991). Fentanyl's duration of action lengthens most with infusion, sufentanil's much less so, and alfentanil's the least. Except for remifentanil, all of the above-mentioned opioids are metabolized in the liver, followed by renal and biliary excretion of the metabolites (Tegeder et al., 1999). Remifentanil is hydrolyzed by tissue and plasma esterases (Westmoreland et al., 1993).
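The potency ranking above can be turned into a rough equianalgesic conversion by expressing any dose in morphine equivalents. The Python sketch below uses the relative potencies cited in the text and is illustrative only; real conversions must also account for route, duration of action, accumulation, and tolerance:

```python
# Relative potencies (morphine = 1) as cited in the text.
POTENCY = {
    "sufentanil": 1000, "remifentanil": 300, "fentanyl": 100,
    "alfentanil": 15, "morphine": 1, "meperidine": 0.1,
}

def equianalgesic_dose(dose, from_opioid, to_opioid):
    """Convert a dose of one opioid to a roughly equianalgesic dose of
    another via morphine equivalents. Illustrative arithmetic only:
    ignores route, kinetics, accumulation, and tolerance."""
    morphine_equivalent = dose * POTENCY[from_opioid]
    return morphine_equivalent / POTENCY[to_opioid]

# By these ratios, 10 mg of morphine corresponds to about 0.1 mg of
# fentanyl and about 100 mg of meperidine.
print(equianalgesic_dose(10, "morphine", "fentanyl"))
print(equianalgesic_dose(10, "morphine", "meperidine"))
```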
After prolonged administration, morphine metabolites have significant analgesic and hypnotic activity (Christrup, 1997). During the perioperative period, opioids often are given at induction to preempt responses to predictable painful stimuli (e.g., endotracheal intubation and surgical incision). Subsequent doses, either by bolus or infusion, are titrated to the surgical stimulus and the patient's hemodynamic response. Marked decreases in respiratory rate and heart rate with much smaller reductions in blood pressure are seen to varying degrees with all opioids (Bowdle, 1998). Muscle rigidity that can impair ventilation sometimes accompanies larger doses of opioids. The incidence of sphincter of Oddi spasm is increased with all opioids, although morphine appears to be more potent in this regard (Hahn et al., 1988; Thune et al., 1990). After emergence from anesthesia, the frequency and severity of nausea, vomiting, and pruritus are increased by all opioids to about the same degree (Watcha and White, 1992). A useful side effect of meperidine is its ability to reduce shivering, a common problem during emergence from anesthesia (Pauca et al., 1984); other opioids are not as efficacious against shivering, perhaps due to less κ-receptor agonist activity. Opioids sometimes are combined with the neuroleptic droperidol (see Chapter 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania) to produce what is termed neurolept analgesia/anesthesia (analgesia accompanied by a quiescent state with or without loss of consciousness); the addition of 70% nitrous oxide usually is enough to produce anesthesia. Neurolept anesthesia currently is not a commonly used technique and is usually reserved for clinical scenarios where inhalational and/or other parenteral anesthetics are relatively contraindicated. Finally, opioids often are administered intrathecally and epidurally for management of acute and chronic pain.
Neuraxial opioids, with or without local anesthetics, can provide profound analgesia for many surgical procedures; however, respiratory depression and pruritus usually limit their use to major operations.

Neuromuscular Blocking Agents
The practical aspects of the use of neuromuscular blockers as anesthetic adjuncts are briefly described here. The detailed pharmacology of this drug class is presented in Chapter 9: Agents Acting at the Neuromuscular Junction and Autonomic Ganglia. Depolarizing (e.g., succinylcholine) and nondepolarizing muscle relaxants (e.g., pancuronium) often are administered during the induction of anesthesia to relax muscles of the jaw, neck, and airway and thereby facilitate laryngoscopy and endotracheal intubation. As mentioned above, barbiturates will precipitate when mixed with muscle relaxants and should be allowed to clear from the intravenous line prior to injection of a muscle relaxant. Following induction, continued muscle relaxation is desirable for many procedures to aid surgical exposure and/or to provide additional insurance of immobility. Of course, muscle relaxants are not by themselves anesthetics and should not be used in lieu of adequate anesthetic depth. The action of nondepolarizing muscle relaxants is usually antagonized, once muscle paralysis is no longer desired, with an acetylcholinesterase inhibitor such as neostigmine or edrophonium (see Chapter 8: Anticholinesterase Agents) combined with a muscarinic receptor antagonist (see Chapter 7: Muscarinic Receptor Agonists and Antagonists) (e.g., glycopyrrolate or atropine to offset the muscarinic activation of the esterase inhibitors). Other than histamine release by some agents, nondepolarizing muscle relaxants have little in the way of side effects. However, succinylcholine has multiple serious side effects (bradycardia, hyperkalemia, severe myalgia), including induction of malignant hyperthermia in susceptible individuals (see Chapter 9: Agents Acting at the Neuromuscular Junction and Autonomic Ganglia).

Prospectus
Two drug classes, α2-adrenergic receptor agonists and neurosteroids, appear to hold great promise for providing new anesthetics. Currently, the α2-receptor agonist clonidine is being used as an anesthetic adjunct for its sedative and analgesic actions, but its role has been limited by cardiovascular side effects that include bradycardia and hypotension (Hayashi and Maze, 1993). Newer α2-receptor agonists have been under investigation, and recently dexmedetomidine (PRECEDEX) has been approved in the United States for sedation in the intensive care unit (ICU). When used alone, dexmedetomidine, like clonidine, does not reliably provide general anesthesia. However, dexmedetomidine can reduce the MAC of inhalational anesthetics by as much as 90%, a property referred to as anesthetic sparing (Aho et al., 1992). Thus, α2-receptor agonists allow a reduction in the amount of anesthetic and opioids needed to maintain the anesthetized state, perhaps resulting in a more rapid recovery with fewer side effects. Moreover, unlike other anesthetics, α2-receptor agonists can be antagonized specifically by agents such as atipamezole (Karhuvaara et al., 1991); this offers the possibility of an immediate and predictable emergence from anesthesia. Sedation of intensive-care-unit patients by dexmedetomidine (0.2 to 0.7 μg/kg per hour) has the advantage of causing less respiratory depression than do opioids and benzodiazepines; however, infusions of longer than 24 hours are not recommended because of the potential for rebound hypertension.

For some time, certain steroids have been known to produce sedation and anesthesia. In 1971, the neurosteroid alphaxalone (ALTHESIN) was approved in Europe for use as a parenteral anesthetic but was subsequently withdrawn from the market because of several severe anaphylactic reactions to the Cremophor EL formulation (Moneret-Vautrin et al., 1983; Tachon et al., 1983). No neurosteroid has been approved subsequently for general anesthesia.
Nevertheless, neurosteroids have several properties that make them potentially useful parenteral anesthetics, and newer agents have been examined in clinical trials (Gray et al., 1992; Powell et al., 1992). Pregnanolone is capable of providing general anesthesia when used as the sole anesthetic drug and produces minimal pain on injection. It appears to have somewhat fewer cardiovascular side effects than do barbiturates or propofol (Van Hemelrijck et al., 1994). However, recovery after induction with pregnanolone is significantly slower than after propofol. Developing new neurosteroid agonists and antagonists and understanding their mechanism of action are active areas of research in the quest for an ideal
anesthetic agent (Covey et al., 2000; Wittmer et al., 1996).

Acknowledgment
The authors wish to acknowledge Drs. Bryan E. Marshall and David E. Longnecker, the authors of this chapter in the ninth edition of Goodman and Gilman's The Pharmacological Basis of Therapeutics, some of whose material has been retained in this edition.

Chapter 15. Local Anesthetics


Overview
Local anesthetics prevent or relieve pain by interrupting nerve conduction. They bind to a specific receptor site within the pore of the Na+ channels in nerves and block ion movement through this pore. In general, their action is restricted to the site of application and rapidly reverses upon diffusion from the site of action in the nerve. The chemical and pharmacological properties of each drug determine its clinical use. Local anesthetics can be administered by a variety of routes, including topical, infiltration, field or nerve block, intravenous regional, spinal, or epidural, as dictated by clinical circumstances. This chapter covers the mechanism of action of various local anesthetics, their therapeutic use and routes of administration, and individual side effects. The frequency- and voltage-dependence of local anesthetics also are properties of antiarrhythmic agents, discussed in Chapter 35: Antiarrhythmic Drugs.

Local Anesthetics: Introduction
When applied locally to nerve tissue in appropriate concentrations, local anesthetics reversibly block the action potentials responsible for nerve conduction. They act on any part of the nervous system and on every type of nerve fiber. Thus, a local anesthetic in contact with a nerve trunk can cause both sensory and motor paralysis in the area innervated. The great practical advantage of the local anesthetics is that their action is reversible at clinically relevant concentrations; their use is followed by complete recovery in nerve function with no evidence of damage to nerve fibers or cells.

History
The first local anesthetic, cocaine, was serendipitously discovered to have anesthetic properties in the late nineteenth century. Cocaine occurs in abundance in the leaves of the coca shrub (Erythroxylon coca). For centuries, Andean natives have chewed an alkali extract of these leaves for its stimulatory and euphoric actions. Cocaine was first isolated in 1860 by Albert Niemann.
He, like many chemists of that era, tasted his newly isolated compound and noted that it caused a numbing of the tongue. Sigmund Freud studied cocaine's physiological actions, and Carl Koller introduced cocaine into clinical practice in 1884 as a topical anesthetic for ophthalmological surgery. Shortly thereafter, Halsted popularized its use in infiltration and conduction block anesthesia. The many local anesthetics used in clinical practice today all stem from these early observations.

Chemistry and Structure–Activity Relationship
Cocaine is an ester of benzoic acid and the complex alcohol 2-carbomethoxy, 3-hydroxy-tropane (Figure 15–1). Because of its toxicity and addictive properties (see Chapter 24: Drug Addiction and Drug Abuse), a search for synthetic substitutes for cocaine began in 1892 with the work of Einhorn
and his colleagues. In 1905, this resulted in the synthesis of procaine, which became the prototype for local anesthetics for nearly half a century. The most widely used agents today are procaine, lidocaine, bupivacaine, and tetracaine.

Figure 15–1. Structural Formulas of Selected Local Anesthetics. *Note that chloroprocaine has a chlorine atom in position 2 of the aromatic moiety of procaine.

Figure 15–1 shows that the structure of typical local anesthetics contains hydrophilic and hydrophobic moieties that are separated by an intermediate ester or amide linkage. A broad range of compounds containing these minimal structural features can satisfy the requirements for action as local anesthetics. The hydrophilic group usually is a tertiary amine, but it also may be a secondary amine; the hydrophobic moiety must be aromatic. The nature of the linking group determines certain of the pharmacological properties of these agents. For example, local anesthetics with an ester link are hydrolyzed readily by plasma esterases. The structure–activity relationship and the physicochemical properties of local anesthetics have been reviewed by Courtney and Strichartz (1987). In brief, hydrophobicity increases both the potency and the duration of action of the local anesthetics. This arises because association of the drug at hydrophobic sites enhances the partitioning of the drug to its sites of action and decreases the rate of metabolism by plasma esterases and liver enzymes. In addition, the receptor site for these drugs on Na+ channels is thought to be hydrophobic (see below), so that receptor affinity for anesthetic agents is increased for more hydrophobic drugs. Hydrophobicity also increases toxicity, so that the therapeutic index actually is decreased for more hydrophobic drugs. Molecular size also influences the rate of dissociation of local anesthetics from their receptor sites
(Courtney and Strichartz, 1987). Smaller drug molecules can escape from the receptor site more rapidly. This characteristic is important in rapidly firing tissues, in which local anesthetics bind during action potentials and dissociate during the period of membrane repolarization. Rapid binding of local anesthetics during action potentials allows the frequency- and voltage-dependence of their action (see below).

Mechanism of Action
Local anesthetics prevent the generation and the conduction of the nerve impulse. Their primary site of action is the cell membrane. Conduction block can be demonstrated in squid giant axons from which the axoplasm has been removed. Local anesthetics block conduction by decreasing or preventing the large transient increase in the permeability of excitable membranes to Na+ that normally is produced by a slight depolarization of the membrane (see Chapter 12: Neurotransmission and the Central Nervous System and Strichartz and Ritchie, 1987). This action of local anesthetics is due to their direct interaction with voltage-gated Na+ channels. As the anesthetic action progressively develops in a nerve, the threshold for electrical excitability gradually increases, the rate of rise of the action potential declines, impulse conduction slows, and the safety factor for conduction decreases. These factors decrease the probability of propagation of the action potential, and nerve conduction eventually fails. In addition to Na+ channels, local anesthetics can bind to other membrane proteins (see Butterworth and Strichartz, 1990). In particular, they can block K+ channels (see Strichartz and Ritchie, 1987). However, since the interaction of local anesthetics with K+ channels requires higher concentrations of drug, blockade of conduction is not accompanied by any large or consistent change in resting membrane potential.
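The frequency-dependence described above (drug binds during action potentials and dissociates during repolarization) can be caricatured with a toy kinetic model: higher firing rates leave less time for dissociation between spikes, so block accumulates. The scheme, parameter values, and function name below are our own illustrative assumptions, not fitted biophysics:

```python
import math

def steady_state_block(p_bind, k_off, freq_hz, n_stimuli=200):
    """Toy model of use-dependent (frequency-dependent) block.

    Assumptions: a fraction p_bind of unblocked channels binds drug
    during each action potential, and bound drug dissociates
    exponentially with rate k_off (1/s) during the interstimulus
    interval. Returns the blocked fraction after n_stimuli spikes.
    """
    rest = 1.0 / freq_hz
    blocked = 0.0
    for _ in range(n_stimuli):
        blocked += (1.0 - blocked) * p_bind   # binding during the spike
        blocked *= math.exp(-k_off * rest)    # dissociation at rest
    return blocked

# Higher firing rates leave less recovery time, so block accumulates:
for f in (1, 10, 50):
    print(f, round(steady_state_block(0.1, 2.0, f), 3))
```

With these illustrative parameters, the blocked fraction rises steeply with stimulation frequency, mirroring why rapidly firing tissues are preferentially blocked.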
Quaternary analogs of local anesthetics block conduction when applied internally to perfused giant axons of squid, but they are relatively ineffective when applied externally. These observations suggest that the site at which local anesthetics act, at least in their charged form, is accessible only from the inner surface of the membrane (Narahashi and Frazier, 1971; Strichartz and Ritchie, 1987). Therefore, local anesthetics applied externally first must cross the membrane before they can exert a blocking action. Although a variety of physicochemical models have been proposed to explain how local anesthetics achieve conduction block (see Courtney and Strichartz, 1987), it now is generally accepted that the major mechanism of action of these drugs involves their interaction with one or more specific binding sites within the Na+ channel (see Butterworth and Strichartz, 1990). Biochemical, biophysical, and molecular biological investigations during the past two decades have led to a rapid expansion of knowledge about the structure and function of the Na+ channel and other voltage-gated ion channels (see Catterall, 2000, and Chapter 12: Neurotransmission and the Central Nervous System). The Na+ channels of the mammalian brain are heterotrimeric complexes of glycosylated proteins with an aggregate molecular size in excess of 300,000 daltons; the individual subunits are designated α (260,000 daltons), β1 (36,000 daltons), and β2 (33,000 daltons). The large α subunit of the Na+ channel contains four homologous domains (I to IV); each domain is thought to consist of six transmembrane segments in α-helical conformation (S1 to S6; see Figure 15–2) and an additional, membrane-reentrant pore loop. The Na+-selective transmembrane pore of the channel is presumed to reside in the center of a nearly symmetrical structure formed by the four homologous domains.
The voltage dependence of channel opening is hypothesized to reflect conformational changes that result from the movement of "gating charges" (voltage sensors) in response to changes in the transmembrane potential. The gating charges are located in the S4 transmembrane helix; the
S4 helices are both hydrophobic and positively charged, containing lysine or arginine residues at every third position. It is postulated that these residues move perpendicular to the plane of the membrane under the influence of the transmembrane potential, initiating a series of conformational changes in all four domains which leads to the open state of the channel (Catterall, 1988; Figure 15–2).

Figure 15–2. Structure and Function of Voltage-Gated Na+ Channels. A. A two-dimensional representation of the α (center), β1 (left), and β2 (right) subunits of the voltage-gated Na+ channel from mammalian brain. The polypeptide chains are represented by continuous lines with length approximately proportional to the actual length of each segment of the channel protein. Cylinders represent regions of transmembrane helices. ψ indicates sites of demonstrated N-linked glycosylation. Note the repeated structure of the four homologous domains (I through IV) of the α subunit. Voltage Sensing. The S4 transmembrane segments in each homologous domain of the α subunit serve as voltage sensors. (+) represents the positively charged amino acid residues at every third position within these segments. The electrical field (negative inside) exerts a force on these charged amino acid residues, pulling them toward the intracellular side of the membrane. Pore. The S5 and S6 transmembrane segments and the short membrane-associated loops between them (segments SS1 and SS2; see Figure 15–3) form the walls of the pore in the center of an approximately symmetrical square array of the four homologous domains (see Panel B). The amino acid residues indicated by circles in segment SS2 are critical for determining the conductance and ion selectivity of the Na+ channel and its ability to bind the extracellular pore-blocking toxins tetrodotoxin and saxitoxin. Inactivation. The short intracellular loop connecting homologous domains III and IV serves as the inactivation gate of the Na+ channel.
It is thought to fold into the intracellular mouth of the pore and occlude it within a few milliseconds after the channel opens. Three hydrophobic residues (isoleucine-phenylalanine-methionine, IFM) at the position marked h appear to serve as an inactivation particle, entering the intracellular mouth of the pore and binding to an inactivation gate receptor there. Modulation. The gating of the Na+ channel can be modulated by protein phosphorylation. Phosphorylation of the inactivation gate between homologous domains III and IV by protein kinase C slows inactivation. Phosphorylation of sites in the intracellular loop between homologous domains I and II by either protein kinase C or cyclic AMP-dependent protein kinase reduces Na+ channel activation. B. The four homologous domains of the Na+ channel α subunit are illustrated as a square array as viewed looking down on the membrane. The sequence of conformational changes that the Na+ channel undergoes during activation and inactivation is diagrammed. Upon depolarization, each of the four homologous domains undergoes a conformational change in sequence to an activated state. After all four domains have activated, the Na+ channel can open. Within a few milliseconds after opening, the inactivation gate between domains III and IV closes over the intracellular mouth of the channel and occludes it, preventing further ion conductance. (Adapted from Catterall, 1988, with permission.)
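The sequential gating scheme in panel B (each of the four homologous domains activating in turn, the channel opening only after all four have activated, and the III-IV linker then occluding the open pore) can be caricatured as a toy state-occupancy model. This is purely an illustrative sketch: the per-step transition probabilities below are invented for demonstration, not measured values.

```python
STATES = ["C0", "C1", "C2", "C3", "O", "I"]

def step(p, a=0.4, h=0.25):
    """Advance state occupancies one time step.

    a: chance per step that the next S4 voltage sensor activates (invented)
    h: chance per step that the open channel inactivates (invented)
    """
    q = p.copy()
    for i in range(4):                     # sequential domain activation
        src = f"C{i}"
        dst = f"C{i+1}" if i < 3 else "O"  # fourth activation opens the pore
        flow = a * p[src]
        q[src] -= flow
        q[dst] += flow
    flow = h * p["O"]                      # III-IV linker occludes the pore
    q["O"] -= flow
    q["I"] += flow
    return q

def simulate(n_steps=200, **kw):
    """Depolarize at t=0 (all channels in C0) and track occupancies."""
    p = dict.fromkeys(STATES, 0.0)
    p["C0"] = 1.0
    peak_open = 0.0
    for _ in range(n_steps):
        p = step(p, **kw)
        peak_open = max(peak_open, p["O"])
    return p, peak_open

final, peak = simulate()
print(f"peak open-state occupancy: {peak:.2f}")
print(f"inactivated at end of run: {final['I']:.2f}")
```

The model reproduces the qualitative sequence in the legend: a transient open-state peak during maintained depolarization, followed by nearly complete occupancy of the inactivated state.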

Figure 15-3. The Local Anesthetic Receptor Site. Transmembrane segment S6 in domain IV (IVS6) is illustrated as an α helix along with adjacent short segments SS1 and SS2 that contribute to formation of the extracellular mouth of the pore. Each circle represents an amino acid residue in segment IVS6. The three critical residues for formation of the local anesthetic binding site are shaded blue. The local anesthetic lidocaine is shown docked to two of these residues, phenylalanine (F) 1764 and tyrosine (Y) 1771. The third shaded residue is isoleucine (I) 1760. Substitution of a smaller alanine residue at this position by site-directed mutagenesis allows local anesthetics to reach their receptor site from outside the membrane. This residue therefore is assumed to form the outer boundary of the receptor site (see Ragsdale et al., 1994).

The transmembrane pore of the Na+ channel is thought to be surrounded by the S5 and S6 transmembrane helices and the short membrane-associated segments between them, designated SS1 and SS2. Amino acid residues in these short segments are the most critical determinants of the ion conductance and selectivity of the channel. After it opens, the Na+ channel inactivates within a few milliseconds due to closure of an inactivation gate. This functional gate is formed by the short intracellular loop of protein that connects homologous domains III and IV (Figure 15-2). The loop folds over the intracellular mouth of the transmembrane pore during the process of inactivation. It may bind to an inactivation gate "receptor" formed by the intracellular mouth of the pore. Amino acid residues that are important for local anesthetic binding are found in the S6 segment in domain IV (Ragsdale et al., 1994). Hydrophobic amino acid residues near the center and the intracellular end of the S6 segment may interact directly with bound local anesthetics (Figure 15-3). Experimental mutation of a large hydrophobic amino acid residue (isoleucine) to a smaller one (alanine) near the extracellular end of this segment creates a pathway for access of charged local anesthetic drugs from the extracellular solution to the receptor site. These findings place the local anesthetic receptor site within the intracellular half of the transmembrane pore of the Na+ channel, with part of its structure contributed by amino acids in the S6 segment of domain IV.

Frequency- and Voltage-Dependence of Local Anesthetic Action

The degree of block produced by a given concentration of local anesthetic depends on how the nerve has been stimulated and on its resting membrane potential. Thus, a resting nerve is much less sensitive to a local anesthetic than one that is repetitively stimulated; higher frequency of stimulation and more positive membrane potential cause a greater degree of anesthetic block. These frequency- and voltage-dependent effects of local anesthetics occur because the local anesthetic molecule in its charged form gains access to its binding site within the pore only when the Na+ channel is in an open state and because the local anesthetic binds more tightly to and stabilizes the inactivated state of the Na+ channel (see Courtney and Strichartz, 1987; Butterworth and Strichartz, 1990). Local anesthetics exhibit these properties to different extents depending on their pKa, lipid solubility, and molecular size. In general, the frequency dependence of local anesthetic action depends critically on the rate of dissociation from the receptor site in the pore of the Na+ channel. A high frequency of stimulation is required for rapidly dissociating drugs so that drug binding during the action potential exceeds drug dissociation between action potentials. Dissociation of smaller and more hydrophobic drugs is more rapid, so a higher frequency of stimulation is required to yield frequency-dependent block. Frequency-dependent block of ion channels is most important for the actions of antiarrhythmic drugs (see Chapter 35: Antiarrhythmic Drugs).

Differential Sensitivity of Nerve Fibers to Local Anesthetics

Although there is great individual variation, for most patients treatment with local anesthetics causes the sensation of pain to disappear first, followed by the sensations of temperature, touch, deep pressure, and finally motor function (Table 15-1).
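Before turning to fiber-type differences, the frequency dependence described above can be reduced to simple bookkeeping: the charged drug reaches its site mainly during the action potential, while unbinding proceeds between stimuli. The sketch below (all rate constants invented for illustration) reproduces the qualitative behavior: a slowly dissociating drug accumulates block at a stimulus frequency at which a rapidly dissociating drug does not.

```python
import math

def steady_block(k_on=0.5, k_off=2.0, freq_hz=10.0, n_pulses=50):
    """Fraction of Na+ channels drug-blocked after a stimulus train.

    k_on:    fractional binding per action potential, while the pore is
             open and accessible (invented number)
    k_off:   dissociation rate between stimuli, in s^-1 (invented number)
    freq_hz: stimulation frequency
    """
    interval = 1.0 / freq_hz                     # time between action potentials
    blocked = 0.0
    for _ in range(n_pulses):
        blocked += k_on * (1.0 - blocked)        # binding during the spike
        blocked *= math.exp(-k_off * interval)   # unbinding at rest
    return blocked

# A slowly dissociating drug accumulates block at 10 Hz; a rapidly
# dissociating drug shows little block at the same frequency.
print(round(steady_block(k_off=2.0), 3))    # slow off-rate: substantial block
print(round(steady_block(k_off=40.0), 3))   # fast off-rate: little block
```

Lowering the stimulus frequency in this model reduces the accumulated block toward zero, matching the observation that a resting nerve is far less sensitive than a repetitively stimulated one.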
Classical experiments with intact nerves showed that the δ wave in the compound action potential, which represents slowly conducting, small-diameter myelinated fibers, was reduced more rapidly and at lower concentrations of cocaine than was the α wave, which represents rapidly conducting, large-diameter fibers (Gasser and Erlanger, 1929). In general, autonomic fibers, small unmyelinated C fibers (mediating pain sensations), and small myelinated Aδ fibers (mediating pain and temperature sensations) are blocked before the larger myelinated Aγ, Aβ, and Aα fibers (mediating postural, touch, pressure, and motor information; reviewed in Raymond and Gissen, 1987). The differential rate of block exhibited by fibers mediating different sensations is of considerable practical importance in the use of local anesthetics. The precise mechanisms responsible for this apparent specificity of local anesthetic action on pain fibers are not known, but several factors may contribute. The initial hypothesis from the classical work on intact nerves was that sensitivity to local anesthetic block decreases with increasing fiber size, consistent with high sensitivity for pain sensation mediated by small fibers and low sensitivity for motor function mediated by large fibers (Gasser and Erlanger, 1929). However, when nerve fibers are dissected from nerves to allow direct measurement of action potential generation, no clear correlation of the concentration dependence of local anesthetic block with fiber diameter is observed (Franz and Perry, 1974; Fink and Cairns, 1984; Huang et al., 1997). Therefore, it is unlikely that fiber size per se determines the sensitivity to local anesthetic block under steady-state conditions. However, the spacing of nodes of Ranvier increases with the size of nerve fibers.
Because a fixed number of nodes must be blocked to prevent conduction, small fibers with closely spaced nodes of Ranvier may be blocked more rapidly during treatment of intact nerves, because the local anesthetic reaches a critical length of nerve more rapidly (Franz and Perry, 1974). Differences in tissue barriers and the location of smaller C fibers and Aδ fibers in nerves also may influence the rate of local anesthetic action.

Effect of pH

Local anesthetics tend to be only slightly soluble as unprotonated amines. Therefore, they are generally marketed as water-soluble salts, usually hydrochlorides. Inasmuch as the local anesthetics are weak bases (typical pKa values range from 8 to 9), their hydrochloride salts are mildly acidic. This property increases the stability of the local anesthetic esters and of any accompanying vasoconstrictor substance. Under usual conditions of administration, the pH of the local anesthetic solution rapidly equilibrates to that of the extracellular fluids. Although the unprotonated species of the local anesthetic is necessary for diffusion across cellular membranes, it is the cationic species that interacts preferentially with Na+ channels. This conclusion has been supported by the results of experiments on anesthetized mammalian nonmyelinated fibers (Ritchie and Greengard, 1966). In these experiments, conduction could be blocked or unblocked merely by adjusting the pH of the bathing medium to 7.2 or 9.6, respectively, without altering the amount of anesthetic present. The primary role of the cationic form also has been demonstrated clearly by Narahashi and colleagues, who perfused the extracellular and axoplasmic surfaces of the giant squid axon with tertiary and quaternary amine local anesthetics (Narahashi and Frazier, 1971). However, the unprotonated molecular forms also possess some anesthetic activity (Butterworth and Strichartz, 1990).

Prolongation of Action by Vasoconstrictors

The duration of action of a local anesthetic is proportional to the time during which it is in contact with nerve. Consequently, maneuvers that keep the drug at the nerve prolong the period of anesthesia. Cocaine itself constricts blood vessels by potentiating the action of norepinephrine (see Chapters 6 and 10), thereby preventing its own absorption. In clinical practice, preparations of local anesthetics often contain a vasoconstrictor, usually epinephrine. The vasoconstrictor performs a dual service.
By decreasing the rate of absorption, it not only localizes the anesthetic at the desired site but also allows the rate at which the anesthetic is destroyed in the body to keep pace with the rate at which it is absorbed into the circulation. This reduces its systemic toxicity. It should be noted, however, that epinephrine also dilates skeletal muscle vascular beds through actions at β2-adrenergic receptors and, therefore, has the potential to increase the systemic toxicity of anesthetic deposited in muscle tissue. Some of the vasoconstrictor agent may be absorbed systemically, occasionally to an extent sufficient to cause untoward reactions (see below). There also may be delayed wound healing, tissue edema, or necrosis after local anesthesia. These effects seem to occur partly because sympathomimetic amines increase the oxygen consumption of the tissue; this, together with the vasoconstriction, leads to hypoxia and local tissue damage. The use of vasoconstrictors in local anesthetic preparations for anatomical regions with limited collateral circulation could produce irreversible hypoxic damage, tissue necrosis, and gangrene and therefore is contraindicated.

Undesired Effects of Local Anesthetics

In addition to blocking conduction in nerve axons in the peripheral nervous system, local anesthetics interfere with the function of all organs in which conduction or transmission of impulses occurs. Thus, they have important effects on the central nervous system (CNS), the autonomic ganglia, the neuromuscular junction, and all forms of muscle (for review, see Covino, 1987; Garfield and Gugino, 1987; Gintant and Hoffman, 1987). The danger of such adverse reactions is proportional to the concentration of local anesthetic achieved in the circulation.

Central Nervous System

Following absorption, local anesthetics may cause stimulation of the CNS, producing restlessness and tremor that may proceed to clonic convulsions. In general, the more potent the anesthetic, the more readily convulsions may be produced. Alterations of CNS activity are thus predictable from the local anesthetic agent in question and the blood concentration achieved. Central stimulation is followed by depression; death usually is caused by respiratory failure. The apparent stimulation and subsequent depression produced by applying local anesthetics to the CNS presumably is due solely to depression of neuronal activity; a selective depression of inhibitory neurons is thought to account for the excitatory phase in vivo. Rapid systemic administration of local anesthetics may produce death with no or only transient signs of CNS stimulation. Under these conditions, the concentration of the drug probably rises so rapidly that all neurons are depressed simultaneously. Airway control and support of respiration are essential features of treatment in the late stage of intoxication. Benzodiazepines or rapidly acting barbiturates administered intravenously are the drugs of choice for both the prevention and arrest of convulsions (see Chapter 17: Hypnotics and Sedatives). Although drowsiness is the most frequent complaint that results from the CNS actions of local anesthetics, lidocaine may produce dysphoria or euphoria and muscle twitching. Moreover, both lidocaine and procaine may produce a loss of consciousness that is preceded only by symptoms of sedation (see Covino, 1987). Whereas other local anesthetics also show the effect, cocaine has a particularly prominent effect on mood and behavior. These effects of cocaine and its potential for abuse are discussed in Chapter 24: Drug Addiction and Drug Abuse.

Cardiovascular System

Following systemic absorption, local anesthetics act on the cardiovascular system (see Covino, 1987). The primary site of action is the myocardium, where decreases in electrical excitability, conduction rate, and force of contraction occur. In addition, most local anesthetics cause arteriolar dilation.
Untoward cardiovascular effects usually are seen only after high systemic concentrations are attained and effects on the CNS are produced. However, on rare occasions, lower doses of some local anesthetics will cause cardiovascular collapse and death, probably due to either an action on the pacemaker or the sudden onset of ventricular fibrillation. It should be noted that ventricular tachycardia and fibrillation are relatively uncommon consequences of local anesthetics other than bupivacaine. The effects of local anesthetics such as lidocaine and procainamide, which also are used as antiarrhythmic drugs, are discussed in Chapter 35: Antiarrhythmic Drugs. Finally, it should be stressed that untoward cardiovascular effects of local anesthetic agents may result from their inadvertent intravascular administration, especially if epinephrine also is present.

Smooth Muscle

The local anesthetics depress contractions in the intact bowel and in strips of isolated intestine (see Zipf and Dittmann, 1971). They also relax vascular and bronchial smooth muscle, although low concentrations initially may produce contraction (see Covino, 1987). Spinal and epidural anesthesia, as well as instillation of local anesthetics into the peritoneal cavity, cause sympathetic nervous system paralysis, which can result in increased tone of gastrointestinal musculature (see below). Local anesthetics may increase the resting tone and decrease the contractions of isolated human uterine muscle; however, uterine contractions seldom are depressed directly during intrapartum regional anesthesia.

Neuromuscular Junction and Ganglionic Synapse

Local anesthetics also affect transmission at the neuromuscular junction. Procaine, for example, can block the response of skeletal muscle to maximal motor-nerve volleys and to acetylcholine at concentrations where the muscle responds normally to direct electrical stimulation. Similar effects occur at autonomic ganglia. These effects are due to block of the ion channel of the acetylcholine receptor by high concentrations of the local anesthetics (Neher and Steinbach, 1978; Charnet et al., 1990).

Hypersensitivity to Local Anesthetics

Rare individuals are hypersensitive to local anesthetics. The reaction may manifest itself as an allergic dermatitis or a typical asthmatic attack (see Covino, 1987). It is important to distinguish allergic reactions from toxic side effects and from the effects of coadministered vasoconstrictors. Hypersensitivity seems to occur almost exclusively with local anesthetics of the ester type and frequently extends to chemically related compounds. For example, individuals sensitive to procaine also may react to structurally similar compounds (e.g., tetracaine) through reaction to a common metabolite. Although agents of the amide type are essentially free of this problem, solutions of such agents may contain preservatives such as methylparaben that may provoke an allergic reaction (Covino, 1987). Local anesthetic preparations containing a vasoconstrictor also may elicit allergic responses due to the sulfite contained in them.

Metabolism of Local Anesthetics

The metabolic fate of local anesthetics is of great practical importance, because their toxicity depends largely on the balance between their rates of absorption and elimination. As noted above, the rate of absorption of many anesthetics can be reduced considerably by the incorporation of a vasoconstrictor agent in the anesthetic solution. However, the rate of destruction of local anesthetics varies greatly, and this is a major factor in determining the safety of a particular agent.
Since toxicity is related to the free concentration of drug, binding of the anesthetic to proteins in the serum and to tissues reduces the concentration of free drug in the systemic circulation and, consequently, reduces toxicity. For example, in intravenous regional anesthesia of an extremity, about half of the original anesthetic dose is still tissue-bound 30 minutes after release of the tourniquet; the lungs also bind large quantities of local anesthetic (Arthur, 1987). Some of the common local anesthetics (e.g., tetracaine) are esters. They are hydrolyzed and inactivated primarily by a plasma esterase, probably plasma cholinesterase. The liver also participates in the hydrolysis of local anesthetics. Since spinal fluid contains little or no esterase, anesthesia produced by the intrathecal injection of an anesthetic agent will persist until the agent has been absorbed into the circulation. The amide-linked local anesthetics are, in general, degraded by the hepatic endoplasmic reticulum, the initial reactions involving N-dealkylation and subsequent hydrolysis (Arthur, 1987). However, with prilocaine, the initial step is hydrolytic, forming o-toluidine metabolites that can cause methemoglobinemia. Caution is indicated in the extensive use of amide-linked local anesthetics in patients with severe hepatic disease. The amide-linked local anesthetics are extensively (55% to 95%) bound to plasma proteins, particularly α1-acid glycoprotein. Many factors increase the concentration of this plasma protein (cancer, surgery, trauma, myocardial infarction, smoking, uremia) or decrease it (oral contraceptive agents). This results in changes in the amount of anesthetic delivered to the liver for metabolism, thus influencing systemic toxicity. Age-related changes in protein binding of local anesthetics also occur. The neonate is relatively deficient in plasma proteins that bind local anesthetics and thereby has greater susceptibility to toxicity.
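Because toxicity tracks the free rather than the total drug concentration, the quoted 55% to 95% range of protein binding spans a nine-fold range of free drug at the same total plasma concentration. A minimal arithmetic illustration (concentrations in arbitrary units):

```python
def free_drug(total, bound_fraction):
    """Free (pharmacologically active) drug concentration for a given
    total plasma concentration and fractional protein binding."""
    return total * (1.0 - bound_fraction)

# Same total concentration at the two extremes of the 55% to 95%
# binding range quoted for the amide local anesthetics:
low_binding = free_drug(10.0, 0.55)    # free drug at 55% binding
high_binding = free_drug(10.0, 0.95)   # free drug at 95% binding
print(low_binding / high_binding)      # nine-fold difference in free drug
```

This is why the changes in α1-acid glycoprotein concentration listed above can meaningfully shift systemic toxicity even when total drug levels are unchanged.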
Plasma proteins are not the sole determinant of local anesthetic availability. Uptake by the lung also may play an important role in the distribution of amide-linked local anesthetics in the body.

Cocaine

Chemistry

As outlined in the introduction above, cocaine occurs in abundance in the leaves of the coca shrub and is an ester of benzoic acid and methylecgonine. Ecgonine is an amino alcohol base closely related to tropine, the amino alcohol in atropine. Cocaine has the same fundamental structure as the synthetic local anesthetics (see Figure 15-1).

Pharmacological Actions and Preparations

The clinically desired actions of cocaine are the blockade of nerve impulses, as a consequence of its local anesthetic properties, and local vasoconstriction, secondary to inhibition of local norepinephrine reuptake. Toxicity and its potential for abuse have steadily decreased the clinical uses of cocaine. Its high toxicity is due to block of catecholamine uptake in both the central and peripheral nervous systems. Its euphoric properties are due primarily to inhibition of catecholamine uptake, particularly of dopamine, at central nervous system synapses. Other local anesthetics do not block the uptake of norepinephrine and do not produce the sensitization to catecholamines, vasoconstriction, or mydriasis characteristic of cocaine. Currently, cocaine is used primarily for topical anesthesia of the upper respiratory tract, where its combined vasoconstrictor and local anesthetic properties provide anesthesia and shrinking of the mucosa with a single agent. Cocaine hydrochloride is used as a 1%, 4%, or 10% solution for topical application. For most applications, the 1% or 4% preparation is preferred to reduce toxicity. Because of its abuse potential, cocaine is listed as a schedule II drug by the United States Drug Enforcement Administration.

Lidocaine

Lidocaine (XYLOCAINE, others), introduced in 1948, is now the most widely used local anesthetic. The chemical structure of lidocaine is shown in Figure 15-1.

Pharmacological Actions

The pharmacological actions that lidocaine shares with other local anesthetic drugs have been described.
Lidocaine produces faster, more intense, longer-lasting, and more extensive anesthesia than does an equal concentration of procaine. Unlike procaine, it is an aminoethylamide and is the prototypical member of the amide class of local anesthetics. It is an alternative choice for individuals sensitive to ester-type local anesthetics.

Absorption, Fate, and Excretion

Lidocaine is absorbed rapidly after parenteral administration and from the gastrointestinal and respiratory tracts. Although it is effective when used without any vasoconstrictor, in the presence of epinephrine the rate of absorption and the toxicity are decreased, and the duration of action usually is prolonged. In addition to preparations for injection, an iontophoretic, needle-free drug-delivery system for a lidocaine and epinephrine solution (IONTOCAINE) is available. This system generally is used for dermal procedures and provides anesthesia to a depth of up to 10 mm. Lidocaine is dealkylated in the liver by mixed-function oxidases to monoethylglycine xylidide and glycine xylidide, which can be metabolized further to monoethylglycine and xylidide. Both monoethylglycine xylidide and glycine xylidide retain local anesthetic activity. In human beings, about 75% of the xylidide is excreted in the urine as the further metabolite 4-hydroxy-2,6-dimethylaniline (see Arthur, 1987).

Toxicity

The side effects of lidocaine seen with increasing dose include drowsiness, tinnitus, dysgeusia, dizziness, and twitching. As the dose increases, seizures, coma, and respiratory depression and arrest will occur. Clinically significant cardiovascular depression usually occurs at serum lidocaine levels that produce marked CNS effects. The metabolites monoethylglycine xylidide and glycine xylidide may contribute to some of these side effects.

Clinical Uses

Lidocaine has a wide range of clinical uses as a local anesthetic; it has utility in almost any application where a local anesthetic of intermediate duration is needed. Lidocaine also is used as an antiarrhythmic agent (see Chapter 35: Antiarrhythmic Drugs).

Bupivacaine

Pharmacological Actions

Bupivacaine (MARCAINE, SENSORCAINE), introduced in 1963, is a widely used amide local anesthetic; its structure is similar to that of lidocaine except that the amine-containing group is a butyl piperidine (Figure 15-1). It is a potent agent capable of producing prolonged anesthesia. Its long duration of action plus its tendency to provide more sensory than motor block has made it a popular drug for providing prolonged analgesia during labor or the postoperative period. By taking advantage of indwelling catheters and continuous infusions, bupivacaine can be used to provide several days of effective analgesia.

Toxicity

Bupivacaine and etidocaine (see below) are more cardiotoxic than equieffective doses of lidocaine. Clinically, this is manifested by severe ventricular arrhythmias and myocardial depression after inadvertent intravascular administration of large doses of bupivacaine. The enhanced cardiotoxicity of bupivacaine probably is due to multiple factors. Lidocaine and bupivacaine both rapidly block cardiac Na+ channels during systole.
However, bupivacaine dissociates much more slowly than does lidocaine during diastole, so a significant fraction of Na+ channels remains blocked at the end of diastole (at physiological heart rates) with bupivacaine (Clarkson and Hondeghem, 1985). Thus, the block by bupivacaine is cumulative and substantially more than would be predicted by its local anesthetic potency. At least a portion of the cardiac toxicity of bupivacaine may be mediated centrally, as direct injection of small quantities of bupivacaine into the medulla can produce malignant ventricular arrhythmias (Thomas et al., 1986). Bupivacaine-induced cardiac toxicity can be very difficult to treat, and its severity is enhanced in the presence of acidosis, hypercarbia, and hypoxemia.

Other Synthetic Local Anesthetics

The number of synthetic local anesthetics is so large that it is impractical to consider them all here. Some local anesthetic agents are too toxic to be given by injection. Their use is restricted to topical application to the eye (see Chapter 66: Ocular Pharmacology), the mucous membranes, or the skin (see Chapter 65: Dermatological Pharmacology). Many local anesthetics are suitable, however, for infiltration or injection to produce nerve block; some of them also are useful for topical application.

The main categories of local anesthetics are given below; agents are listed alphabetically.

Local Anesthetics Suitable for Injection

Chloroprocaine

Chloroprocaine (NESACAINE), an ester local anesthetic introduced in 1952, is a chlorinated derivative of procaine (Figure 15-1). Its major assets are its rapid onset and short duration of action and its reduced acute toxicity due to its rapid metabolism (plasma half-life approximately 25 seconds). Enthusiasm for its use has been tempered by reports of prolonged sensory and motor block after epidural or subarachnoid administration of large doses. This toxicity appears to have been a consequence of low pH and the use of sodium metabisulfite as a preservative in earlier formulations. There are no reports of neurotoxicity with newer preparations of chloroprocaine, which contain calcium EDTA as the preservative, although these preparations also are not recommended for intrathecal administration. A higher-than-expected incidence of muscular back pain following epidural anesthesia with 2-chloroprocaine also has been reported (Stevens et al., 1993). This back pain is thought to be due to tetany in the paraspinous muscles, which may be a consequence of Ca2+ binding by the EDTA included as a preservative; the incidence of back pain appears to be related to the volume of drug injected and its use for skin infiltration.

Etidocaine

Etidocaine (DURANEST), introduced in 1972, is a long-acting amino amide (Figure 15-1). Its onset of action is faster than that of bupivacaine and comparable to that of lidocaine, yet its duration of action is similar to that of bupivacaine. Compared to bupivacaine, etidocaine produces preferential motor blockade. Thus, while it is useful for surgery requiring intense skeletal muscle relaxation, its utility in labor or postoperative analgesia is limited. Its cardiac toxicity is similar to that of bupivacaine (see above).
Mepivacaine

Mepivacaine (CARBOCAINE, others), introduced in 1957, is an intermediate-acting amino amide (Figure 15-1). Its pharmacological properties are similar to those of lidocaine. Mepivacaine, however, is more toxic to the neonate and thus is not used in obstetrical anesthesia. The increased toxicity of mepivacaine in the neonate is related to ion trapping of this agent, because of the lower pH of neonatal blood and the pKa of mepivacaine, rather than to its slower metabolism in the neonate. Mepivacaine appears to have a slightly higher therapeutic index in adults than does lidocaine. Its onset of action is similar to that of lidocaine, and its duration is slightly longer (about 20%) than that of lidocaine in the absence of a coadministered vasoconstrictor. Mepivacaine is not effective as a topical anesthetic.

Prilocaine

Prilocaine (CITANEST) is an intermediate-acting amino amide (Figure 15-1). It has a pharmacological profile similar to that of lidocaine. The primary differences are that it causes little vasodilation and thus can be used without a vasoconstrictor, if desired, and that its increased volume of distribution reduces its CNS toxicity, making it suitable for intravenous regional blocks (see below). Prilocaine is unique among the local anesthetics for its propensity to cause methemoglobinemia, a consequence of the metabolism of its aromatic ring to o-toluidine. Development of methemoglobinemia depends on the total dose administered, usually appearing after a dose of 8 mg/kg. In healthy persons, methemoglobinemia usually is not a problem. If necessary, it can be treated by the intravenous administration of methylene blue (1 to 2 mg/kg). Methemoglobinemia following prilocaine has limited its use in obstetrical anesthesia, because it complicates evaluation of the newborn. Also, methemoglobinemia is more common in neonates because of the decreased resistance of fetal hemoglobin to oxidant stresses and the immaturity of the enzymes in the neonate that convert methemoglobin back to the ferrous state.

Ropivacaine

The cardiac toxicity of bupivacaine stimulated interest in developing a less toxic, long-lasting local anesthetic. The result of that search was the development of a new amino ethylamide, ropivacaine (NAROPIN) (Figure 15-1), the S-enantiomer of 1-propyl-2',6'-pipecoloxylidide. The S-enantiomer was chosen because, like most local anesthetics with a chiral center, it has a lower toxicity than the R-isomer (McClure, 1996), presumably due to slower uptake, resulting in lower blood levels for a given dose. Ropivacaine is slightly less potent than bupivacaine in producing anesthesia. In several animal models, it appears to be less cardiotoxic than equieffective doses of bupivacaine. In clinical studies, ropivacaine appears to be suitable for both epidural and regional anesthesia, with a duration of action similar to that of bupivacaine. Interestingly, it seems to be even more motor-sparing than bupivacaine.

Procaine

Procaine (NOVOCAIN), introduced in 1905, was the first synthetic local anesthetic and is an amino ester (Figure 15-1). While it formerly was used widely, its use now is confined to infiltration anesthesia and occasionally to diagnostic nerve blocks, because of its low potency, slow onset, and short duration of action. While its toxicity is fairly low, it is hydrolyzed in vivo to produce para-aminobenzoic acid, which inhibits the action of sulfonamides. Thus, large doses should not be administered to patients taking sulfonamide drugs.
Tetracaine

Tetracaine (PONTOCAINE), introduced in 1932, is a long-acting amino ester (Figure 15-1). It is significantly more potent and has a longer duration of action than procaine. Tetracaine may exhibit increased systemic toxicity because it is more slowly metabolized than the other commonly used ester local anesthetics. Currently, it is widely used in spinal anesthesia when a drug of long duration is needed. Tetracaine also is incorporated into several topical anesthetic preparations. With the introduction of bupivacaine, tetracaine is rarely used in peripheral nerve blocks because of the large doses often necessary, its slow onset, and its potential for toxicity.

Local Anesthetics Used Primarily to Anesthetize Mucous Membranes and Skin

Some anesthetics are either too irritating or too ineffective to be applied to the eye. However, they are useful as topical anesthetic agents on the skin and/or mucous membranes. These preparations are effective in the symptomatic relief of anal and genital pruritus, poison ivy rashes, and numerous other acute and chronic dermatoses. They are sometimes combined with a glucocorticoid or antihistamine and are available in a number of proprietary formulations. Dibucaine (NUPERCAINAL) is a quinoline derivative. Its toxicity resulted in its removal from the United States market as an injectable preparation; however, it retains wide popularity outside the United States as a spinal anesthetic. It currently is available as a cream and an ointment for use on the skin.
Dyclonine hydrochloride (DYCLONE) has a rapid onset of action and a duration of effect comparable to that of procaine. It is absorbed through the skin and mucous membranes. The compound is used as a 0.5% or 1.0% solution for topical anesthesia during endoscopy, for oral mucositis pain following radiation or chemotherapy, and for anogenital procedures. Pramoxine hydrochloride (ANUSOL, TRONOTHANE, others) is a surface anesthetic agent that is not a benzoate ester. Its distinct chemical structure (Figure 15-1) may help minimize the danger of cross-sensitivity reactions in patients allergic to other local anesthetics. Pramoxine produces satisfactory surface anesthesia and is reasonably well tolerated on the skin and mucous membranes. It is too irritating to be used on the eye or in the nose. Various preparations, usually containing 1% pramoxine, are available for topical application. Anesthetics of Low Solubility Some local anesthetics are poorly soluble in water and, consequently, too slowly absorbed to be toxic. They can be applied directly to wounds and ulcerated surfaces, where they remain localized for long periods of time, producing a sustained anesthetic action. Chemically, they are esters of para-aminobenzoic acid lacking the terminal amino group possessed by the previously described local anesthetics. The most important member of the series is benzocaine (ethyl aminobenzoate; AMERICAINE ANESTHETIC, others). Benzocaine is structurally similar to procaine; the difference is that it lacks the terminal diethylamino group (Figure 15-1). It is incorporated into a large number of topical preparations. Benzocaine has been reported to cause methemoglobinemia (see text concerning methemoglobinemia caused by prilocaine, above); consequently, dosing recommendations must be carefully followed. Local Anesthetics Largely Restricted to Ophthalmological Use Anesthesia of the cornea and conjunctiva can be obtained readily by topical application of local anesthetics. 
However, most of the local anesthetics described above are too irritating for ophthalmological use. The first local anesthetic used in ophthalmology, cocaine, has the severe disadvantages of producing mydriasis and corneal sloughing and has fallen out of favor. The two compounds used most frequently today are proparacaine (ALCAINE, OPHTHAINE, others) and tetracaine (Figure 15-1). In addition to being less irritating during administration, proparacaine has the added advantage of bearing little antigenic similarity to the other benzoate local anesthetics. Thus, it sometimes can be used in individuals sensitive to the amino ester local anesthetics. For use in ophthalmology, these local anesthetics are instilled a single drop at a time. If anesthesia is incomplete, successive drops are applied until satisfactory conditions are obtained. The duration of anesthesia is determined chiefly by the vascularity of the tissue; thus it is longest in normal cornea and least in inflamed conjunctiva. In the latter case, repeated instillations are necessary to maintain adequate anesthesia for the duration of the procedure. Long-term administration of topical anesthesia to the eye has been associated with retarded healing, pitting and sloughing of the corneal epithelium, and predisposition of the eye to inadvertent injury. Thus, these drugs should not be prescribed for self-administration. For drug delivery, pharmacokinetic, and toxicity issues unique to drugs for ophthalmic use, see Chapter 66: Ocular Pharmacology. Tetrodotoxin and Saxitoxin These toxins are two of the most potent poisons known; the minimal lethal dose of each in the mouse is about 8 μg/kg. Both toxins are responsible for fatal poisoning in human beings. Tetrodotoxin is found in the gonads and other visceral tissues of some fish of the order
Tetraodontiformes (to which the Japanese fugu, or puffer fish, belongs); it also occurs in the skin of some newts of the family Salamandridae and of the Costa Rican frog Atelopus. Saxitoxin, and possibly some related toxins, are elaborated by the dinoflagellates Gonyaulax catenella and Gonyaulax tamarensis and are retained in the tissues of clams and other shellfish that eat these organisms. Given the right conditions of temperature and light, the Gonyaulax may multiply so rapidly as to discolor the ocean, hence the term red tide. Shellfish feeding on Gonyaulax at this time become extremely toxic to human beings and are responsible for periodic outbreaks of paralytic shellfish poisoning (see Kao, 1972; Ritchie, 1980). Although the toxins are chemically different from each other, their mechanism of action is similar (see Ritchie, 1980). Both toxins, in nanomolar concentrations, specifically block the outer mouth of the pore of Na+ channels in the membranes of excitable cells. As a result, the action potential is blocked. The receptor site for these toxins is formed by amino acid residues in the SS2 segment of the Na+ channel α subunit (see Figure 15-2) in all four domains (Terlau et al., 1991; Catterall, 2000). Not all Na+ channels are equally sensitive to tetrodotoxin; the channels in cardiac myocytes are resistant, and a tetrodotoxin-resistant Na+ channel is expressed when skeletal muscle is denervated. Both toxins cause death by paralysis of the respiratory muscles; therefore, the treatment of severe cases of poisoning requires artificial ventilation. Blockade of vasomotor nerves, together with a relaxation of vascular smooth muscle, seems to be responsible for the hypotension that is characteristic of tetrodotoxin poisoning (Kao, 1972). Early gastric lavage and therapy to support the blood pressure also are indicated. If the patient survives paralytic shellfish poisoning for 24 hours, the prognosis is good (see Ogura, 1971; Schantz, 1971). 
Clinical Uses of Local Anesthetics Local anesthesia is the loss of sensation in a body part without the loss of consciousness or the impairment of central control of vital functions. It offers two major advantages. The first is that the physiological perturbations associated with general anesthesia are avoided; the second is that neurophysiological responses to pain and stress can be modified beneficially. As discussed above, local anesthetics have the potential to produce deleterious side effects. The choice of a local anesthetic and care in its use are the primary determinants of such toxicity. There is a poor relationship between the amount of local anesthetic injected and peak plasma levels in adults. Furthermore, peak plasma levels vary widely depending on the area of injection. They are highest with interpleural or intercostal block and lowest with subcutaneous infiltration. Thus, recommended maximum doses serve only as general guidelines. The following discussion concerns the pharmacological and physiological consequences of the use of local anesthetics categorized by method of administration. A more comprehensive discussion of their use and administration is presented in standard anesthesiology texts (e.g., Cousins and Bridenbaugh, 1998). Topical Anesthesia Anesthesia of mucous membranes of the nose, mouth, throat, tracheobronchial tree, esophagus, and genitourinary tract can be produced by direct application of aqueous solutions of salts of many local anesthetics or by suspension of the poorly soluble local anesthetics. Tetracaine (2%), lidocaine (2% to 10%), and cocaine (1% to 4%) typically are used. Cocaine is used only in the nose, nasopharynx, mouth, throat, and ear. Cocaine has the unique advantage of producing vasoconstriction as well as anesthesia. The shrinking of mucous membranes decreases operative bleeding while improving surgical visualization. 
Comparable vasoconstriction can be achieved with other local anesthetics by the addition of a low concentration of a vasoconstrictor such as phenylephrine (0.005%). Epinephrine, topically applied, has no significant local effect and does not prolong the duration of
action of local anesthetics applied to mucous membranes because of poor penetration. Maximal safe total dosages for topical anesthesia in a healthy 70-kg adult are 300 mg for lidocaine, 150 mg for cocaine, and 50 mg for tetracaine. Peak anesthetic effect following topical application of cocaine or lidocaine occurs within 2 to 5 minutes (3 to 8 minutes with tetracaine), and anesthesia lasts for 30 to 45 minutes (30 to 60 minutes with tetracaine). Anesthesia is entirely superficial; it does not extend to submucosal structures. This technique does not alleviate joint pain or discomfort from subdermal inflammation or injury. Local anesthetics are absorbed rapidly into the circulation following topical application to mucous membranes or denuded skin. Thus, it must be kept in mind that topical anesthesia always carries the risk of systemic toxic reactions. Systemic toxicity has occurred even following the use of local anesthetics to control discomfort associated with severe diaper rash in infants. Absorption is particularly rapid when local anesthetics are applied to the tracheobronchial tree. Concentrations in blood after instillation of local anesthetics into the airway are nearly the same as those that follow intravenous injection. Surface anesthetics for the skin and cornea have been described above. The introduction of a eutectic mixture of lidocaine (2.5%) and prilocaine (2.5%) (EMLA) bridges the gap between topical and infiltration anesthesia. The efficacy of this combination lies in the fact that the mixture of prilocaine and lidocaine has a melting point less than that of either compound alone, existing at room temperature as an oil that can penetrate intact skin. EMLA cream produces anesthesia to a maximum depth of 5 mm and is applied as a cream on intact skin under an occlusive dressing, which must be left in place for at least 1 hour. 
It is effective for procedures involving skin and superficial subcutaneous structures (e.g., venipuncture and skin graft harvesting). The component local anesthetics will be absorbed into the systemic circulation, potentially producing toxic effects (above). Guidelines are available to calculate the maximum amount of cream that can be applied and area of skin covered. It must not be used on mucous membranes or abraded skin, as rapid absorption across these surfaces may result in systemic toxicity. Infiltration Anesthesia Infiltration anesthesia is the injection of local anesthetic directly into tissue without taking into consideration the course of cutaneous nerves. Infiltration anesthesia can be so superficial as to include only the skin. It also can include deeper structures, including intraabdominal organs when these, too, are infiltrated. The duration of infiltration anesthesia can be approximately doubled by the addition of epinephrine (5 μg/ml) to the injection solution; epinephrine also decreases peak concentrations of local anesthetics in blood. Epinephrine-containing solutions should not, however, be injected into tissues supplied by end arteries, for example, fingers and toes, ears, the nose, and the penis. The intense vasoconstriction produced by epinephrine may result in gangrene. For the same reason, epinephrine should be avoided in solutions injected intracutaneously. Since epinephrine also is absorbed into the circulation, its use should be avoided in those for whom adrenergic stimulation is undesirable. The local anesthetics most frequently used for infiltration anesthesia are lidocaine (0.5% to 1.0%), procaine (0.5% to 1.0%), and bupivacaine (0.125% to 0.25%). When used without epinephrine, up to 4.5 mg/kg of lidocaine, 7 mg/kg of procaine, or 2 mg/kg of bupivacaine can be employed in adults. When epinephrine is added, these amounts can be increased by one-third. 
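The dose arithmetic just described (a per-kilogram ceiling, raised by one-third when epinephrine is added) can be sketched as follows. This is a minimal illustration of the arithmetic only; the function name and dictionary layout are assumptions of this sketch, not part of the text, and real dosing decisions depend on patient factors beyond weight.

```python
# Per-kilogram infiltration ceilings cited above (mg/kg, without epinephrine).
# The dict structure and function name are illustrative choices.
MAX_MG_PER_KG = {"lidocaine": 4.5, "procaine": 7.0, "bupivacaine": 2.0}

def max_infiltration_dose_mg(drug: str, weight_kg: float,
                             with_epinephrine: bool = False) -> float:
    """Maximum total dose in mg; epinephrine raises the ceiling by one-third."""
    dose = MAX_MG_PER_KG[drug] * weight_kg
    if with_epinephrine:
        dose *= 4.0 / 3.0  # "increased by one-third"
    return dose

# A 70-kg adult: 4.5 mg/kg x 70 kg = 315 mg of plain lidocaine,
# or about 420 mg when epinephrine is added.
print(max_infiltration_dose_mg("lidocaine", 70))
print(max_infiltration_dose_mg("lidocaine", 70, with_epinephrine=True))
```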
The advantage of infiltration anesthesia and other regional anesthetic techniques is that it is possible to provide satisfactory anesthesia without disruption of normal bodily functions. The chief
disadvantage of infiltration anesthesia is that relatively large amounts of drug must be used to anesthetize relatively small areas. This is no problem with minor surgery. When major surgery is performed, however, the amount of local anesthetic that is required makes systemic toxic reactions likely. The amount of anesthetic required to anesthetize an area can be reduced significantly and the duration of anesthesia increased markedly by specifically blocking the nerves that innervate the area of interest. This can be done at one of several levels: subcutaneously, at major nerves, or at the level of the spinal roots. Field Block Anesthesia Field block anesthesia is produced by subcutaneous injection of a solution of local anesthetic in such a manner as to anesthetize the region distal to the injection. For example, subcutaneous infiltration of the proximal portion of the volar surface of the forearm results in an extensive area of cutaneous anesthesia that starts 2 to 3 cm distal to the site of injection. The same principle can be applied with particular benefit to the scalp, the anterior abdominal wall, and the lower extremity. The drugs used and the concentrations and doses recommended are the same as for infiltration anesthesia. The advantage of field block anesthesia is that less drug can be used to provide a greater area of anesthesia than when infiltration anesthesia is used. Knowledge of the relevant neuroanatomy obviously is essential for successful field block anesthesia. Nerve Block Anesthesia Injection of a solution of a local anesthetic into or about individual peripheral nerves or nerve plexuses produces even greater areas of anesthesia than do the techniques described above. Blockade of mixed peripheral nerves and nerve plexuses also usually anesthetizes somatic motor nerves, producing skeletal muscle relaxation, which is essential for some surgical procedures. 
The areas of sensory and motor block usually start several centimeters distal to the site of injection. Brachial plexus blocks are particularly useful for procedures on the upper extremity and shoulder. Intercostal nerve blocks are effective for anesthesia and relaxation of the anterior abdominal wall. Cervical plexus block is appropriate for surgery of the neck. Sciatic and femoral nerve blocks are useful for surgery distal to the knee. Other useful nerve blocks prior to surgical procedures include blocks of individual nerves at the wrist and at the ankle, blocks of individual nerves such as the median or ulnar at the elbow, and blocks of sensory cranial nerves. There are four major determinants of the onset of sensory anesthesia following injection near a nerve. These are the proximity of the injection to the nerve, concentration and volume of drug, the degree of ionization of the drug, and time. Local anesthetic is never intentionally injected into the nerve, as this would be painful and could lead to nerve damage. Instead, the anesthetic agent is deposited as close to the nerve as possible. Thus the local anesthetic must diffuse from the site of injection into the nerve, where it acts. The rate of diffusion will be determined chiefly by the concentration of the drug, its degree of ionization (as ionized local anesthetic diffuses more slowly), its hydrophobicity, and the physical characteristics of the tissue surrounding the nerve. Higher concentrations of local anesthetic will result in a more rapid onset of peripheral nerve block. The utility of using higher concentrations, however, is limited by systemic toxicity as well as direct neural toxicity of concentrated local anesthetic solutions. Local anesthetics with lower pKa values tend to have a more rapid onset of action for a given concentration, because more drug is uncharged at neutral pH. 
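The dependence of uncharged fraction on pKa can be made explicit with the Henderson-Hasselbalch relationship for a weak base. The sketch below uses illustrative pKa values that are assumptions of this example (published values vary by source), so the printed percentages are approximate rather than figures from this text:

```python
# Henderson-Hasselbalch for a weak base:
# uncharged (base) fraction = 1 / (1 + 10**(pKa - pH)).
def uncharged_fraction(pka: float, ph: float = 7.4) -> float:
    """Fraction of a weak-base local anesthetic in the uncharged form at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (pka - ph))

# Illustrative pKa values (assumed for this sketch):
for name, pka in [("lidocaine", 7.7), ("bupivacaine", 8.1)]:
    print(f"{name}: {uncharged_fraction(pka):.0%} uncharged at pH 7.4")
```

The drug with the lower pKa has the larger uncharged fraction at physiological pH, which is the basis for its faster onset.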
For example, the onset of action of lidocaine occurs in about 3 minutes; 35% of lidocaine is in the basic form at pH 7.4. In contrast, the onset of action of bupivacaine requires about 15 minutes; only 5% to 10% of bupivacaine is in the basic (uncharged) form at this pH. Increased hydrophobicity might be expected to speed onset by increased penetration into nerve
tissue. However, it also will increase binding in tissue lipids. Furthermore, the more hydrophobic local anesthetics also are more potent (and toxic) and thus must be used at lower concentrations, decreasing the concentration gradient for diffusion. Tissue factors also play a role in determining the rate of onset of anesthetic effects. The amount of connective tissue that must be penetrated can be significant in a nerve plexus compared to isolated nerves and can serve to slow or even prevent adequate diffusion of local anesthetic to the nerve fibers. Duration of nerve block anesthesia depends on the physical characteristics of the local anesthetic used and the presence or absence of vasoconstrictors. Especially important physical characteristics are lipid solubility and protein binding. In general, local anesthetics can be divided into three categories: those with a short (20 to 45 minutes) duration of action in mixed peripheral nerves, such as procaine; those with an intermediate (60 to 120 minutes) duration of action, such as lidocaine and mepivacaine; and those with a long (400 to 450 minutes) duration of action, such as bupivacaine, etidocaine, ropivacaine, and tetracaine. Block duration of the intermediate-acting local anesthetics such as lidocaine can be prolonged by the addition of epinephrine (5 μg/ml). The degree of block prolongation in peripheral nerves following the addition of epinephrine appears to be related to the intrinsic vasodilatory properties of the local anesthetic and thus is most pronounced with lidocaine. The types of nerve fibers that are blocked when a local anesthetic is injected about a mixed peripheral nerve depend on the concentration of drug used, nerve-fiber size, internodal distance, and frequency and pattern of nerve-impulse transmission (see above). Anatomical factors are similarly important. A mixed peripheral nerve or nerve trunk consists of individual nerves surrounded by an investing epineurium. 
The vascular supply is usually centrally located. When a local anesthetic is deposited about a peripheral nerve, it diffuses from the outer surface toward the core along a concentration gradient (DeJong, 1994; Winnie et al., 1977). Consequently, nerves in the outer mantle of the mixed nerve are blocked first. These fibers usually are distributed to more proximal anatomical structures than are those situated near the core of the mixed nerve and are often motor. If the volume and concentration of local anesthetic solution deposited about the nerve are adequate, the local anesthetic eventually will diffuse inwardly in amounts adequate to block even the most centrally located fibers. Lesser amounts of drug will block only nerves in the mantle and the smaller and more sensitive central fibers. Furthermore, since removal of local anesthetics occurs primarily in the core of a mixed nerve or nerve trunk, where the vascular supply is located, the duration of blockade of centrally located nerves is shorter than that of more peripherally situated fibers. Choice of local anesthetic, as well as the amount and concentration administered, is determined by the nerves and the types of fibers to be blocked, the duration of anesthesia required, and the size and health of the patient. For blocks of 2 to 4 hours, lidocaine (1.0% to 1.5%) can be used in the amounts recommended above (see "Infiltration Anesthesia"). Mepivacaine (up to 7 mg/kg of a 1.0% to 2.0% solution) provides anesthesia that lasts about as long as that from lidocaine. Bupivacaine (2 to 3 mg/kg of a 0.25% to 0.375% solution) can be used when a longer duration of action is required. Addition of 5 μg/ml epinephrine prolongs duration and lowers the plasma concentration of the intermediate-acting local anesthetics. Peak concentrations of local anesthetics in blood depend on the amount injected, the physical characteristics of the local anesthetic, and whether or not epinephrine is used. 
They also are determined by the rate of blood flow to the site of injection and the surface area exposed to local anesthetic. This is of particular importance in the safe application of nerve block anesthesia, as the potential for systemic reactions also is related to peak free serum concentrations. For example, peak concentrations of lidocaine in blood following injection of 400 mg without epinephrine for intercostal nerve blocks average 7 μg/ml; the same amount of lidocaine used for block of the brachial plexus results in peak concentrations in blood of approximately 3 μg/ml (Covino and
Vassallo, 1976). The amount of local anesthetic that can be injected must, therefore, be adjusted according to the anatomical site of the nerve(s) to be blocked to minimize untoward effects. Addition of epinephrine can decrease peak plasma concentrations by 20% to 30%. Multiple nerve blocks (e.g., intercostal block) or blocks performed in vascular regions require reduction in the amount of anesthetic that can be given safely, because the surface area for absorption or the rate of absorption is increased. Intravenous Regional Anesthesia (Bier's Block) This technique relies on using the vasculature to bring the local anesthetic solution to the nerve trunks and endings. In this technique, an extremity is exsanguinated with an Esmarch (elastic) bandage, and a proximally located tourniquet is inflated to 100 to 150 mm Hg above the systolic blood pressure. The Esmarch bandage is removed, and the local anesthetic is injected into a previously cannulated vein. Typically, complete anesthesia of the limb ensues within 5 to 10 minutes. Pain from the tourniquet and the potential for ischemic nerve injury limit tourniquet inflation to 2 hours or less. However, the tourniquet should remain inflated for at least 15 to 30 minutes to prevent toxic amounts of local anesthetic from entering the circulation following deflation. Lidocaine, 40 to 50 ml (0.5 ml/kg in children) of a 0.5% solution, without epinephrine, is the drug of choice for this technique. For intravenous regional anesthesia in adults using a 0.5% solution without epinephrine, the dose administered should not exceed 4 mg/kg. A few clinicians prefer prilocaine (0.5%) over lidocaine because of its higher therapeutic index. The attractiveness of this technique lies in its simplicity. 
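The concentration arithmetic behind these figures (a percent w/v solution contains percent x 10 mg of drug per ml) can be sketched as follows; the function name and the 70-kg example are illustrative assumptions, not values taken from this text beyond the 4 mg/kg ceiling and the 0.5% solution:

```python
def mg_in_solution(volume_ml: float, percent_w_v: float) -> float:
    """Total drug mass in mg: a P% (w/v) solution contains P g per 100 ml, i.e., P * 10 mg/ml."""
    return volume_ml * percent_w_v * 10.0

# 50 ml of a 0.5% lidocaine solution carries 250 mg of drug;
# a 4 mg/kg ceiling for an assumed 70-kg adult allows 280 mg.
dose_mg = mg_in_solution(50, 0.5)
ceiling_mg = 4 * 70
print(dose_mg, ceiling_mg, dose_mg <= ceiling_mg)
```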
Its primary disadvantages are that it can be used only for a few anatomical regions, sensation (that is, pain) returns quickly after tourniquet deflation, and premature release or failure of the tourniquet can produce toxic levels of local anesthetic (e.g., 50 ml of 0.5% lidocaine contains 250 mg of lidocaine). For the last reason and because their longer durations of action offer no advantages, the more cardiotoxic local anesthetics, bupivacaine and etidocaine, are not recommended for this technique. Intravenous regional anesthesia is used most often for surgery of the forearm and hand but can be adapted for the foot and distal leg. Spinal Anesthesia Spinal anesthesia follows the injection of local anesthetic into the cerebrospinal fluid (CSF) in the lumbar space. This technique was first performed in human beings and described by Bier in 1899. For a number of reasons, including the ability to produce anesthesia of a considerable fraction of the body with a dose of local anesthetic that produces negligible plasma levels, it still remains one of the most popular forms of anesthesia. In most adults, the spinal cord terminates above the second lumbar vertebra; between that point and the termination of the thecal sac in the sacrum, the lumbar and sacral roots are bathed in CSF. Thus, in this region, there is a relatively large volume of CSF within which to inject drug, thereby minimizing the potential for direct nerve trauma. A brief discussion of the physiological effects of spinal anesthesia and those features relating to the pharmacology of the local anesthetics used are presented here. The technical performance and extensive discussion of the physiological consequences of spinal anesthesia are beyond the scope of this text (see Greene and Brull, 1993; Cousins and Bridenbaugh, 1998). 
Physiological Effects of Spinal Anesthesia Most of the physiological side effects of spinal anesthesia are a consequence of the sympathetic blockade produced by local anesthetic block of the sympathetic fibers in the spinal nerve roots. A thorough understanding of these physiological effects is necessary for the safe and successful application of spinal anesthesia. Although some of them may be deleterious and require treatment,
others can be beneficial for the patient or can improve operating conditions. Most sympathetic fibers leave the spinal cord between T1 and L2 (Chapter 6: Neurotransmission: The Autonomic and Somatic Motor Nervous Systems, Figure 6-1). Although local anesthetic is injected below these levels in the lumbar portion of the dural sac, cephalad spread of the local anesthetic is seen with all but the smallest volumes injected. This cephalad spread is of considerable importance in the practice of spinal anesthesia and potentially is under the control of numerous variables, of which patient position and baricity (density of the drug relative to the density of the CSF) are the most important (Greene, 1983). The degree of sympathetic block is related to the height of sensory anesthesia; often the level of sympathetic blockade is several spinal segments higher, since the preganglionic sympathetic fibers are more sensitive to block by low concentrations of local anesthetic. The effects of sympathetic blockade involve both the actions (now partially unopposed) of the parasympathetic nervous system as well as the response of the unblocked portion of the sympathetic nervous system. Thus, as the level of sympathetic block ascends, the actions of the parasympathetic nervous system are increasingly dominant, and the compensatory mechanisms of the unblocked sympathetic nervous system are diminished. As most sympathetic nerve fibers leave the cord at T1 or below, few additional effects of sympathetic blockade are seen with cervical levels of spinal anesthesia. The consequences of sympathetic blockade will vary among patients as a function of age, physical conditioning, and disease state. Interestingly, sympathetic blockade during spinal anesthesia appears to be inconsequential in healthy children. Clinically, the most important effects of sympathetic blockade during spinal anesthesia are on the cardiovascular system. 
At all but the lowest levels of spinal blockade, some vasodilation will occur. Vasodilation is more marked on the venous than on the arterial side of the circulation, resulting in a pooling of blood in the venous capacitance vessels. At low levels of spinal anesthesia in healthy patients, this reduction in circulating blood volume is well tolerated. With an increasing level of block, this effect becomes more marked and venous return becomes gravity-dependent. If venous return decreases too much, cardiac output and organ perfusion precipitously decline. Venous return can be increased by modest (10 to 15 degrees) head-down tilt or by elevating the legs. At high levels of spinal blockade, the cardiac accelerator fibers, which exit the spinal cord at T1 to T4, will be blocked. This is detrimental in patients dependent on elevated sympathetic tone to maintain cardiac output (e.g., during congestive heart failure or hypovolemia), and it also removes one of the compensatory mechanisms available to maintain organ perfusion during vasodilation. Thus, as the level of spinal block ascends, the rate of cardiovascular compromise can accelerate if not carefully observed and treated. Sudden asystole also can occur, presumably because of loss of sympathetic innervation in the continued presence of parasympathetic activity at the sinoatrial node (Caplan et al., 1988). In the usual clinical situation, blood pressure serves as a surrogate marker for cardiac output and organ perfusion. Treatment of hypotension usually is warranted when the blood pressure decreases by about 30% from resting values. Therapy is aimed at maintaining brain and cardiac perfusion and oxygenation. To achieve these goals, administration of oxygen, fluid infusion, manipulation of patient position as mentioned above, and the administration of vasoactive drugs are all options. 
In particular, patients typically are administered a bolus (500 to 1000 ml) of fluid prior to the administration of spinal anesthesia in an attempt to prevent some of the deleterious effects of spinal blockade. As the usual cause of hypotension is decreased venous return, possibly complicated by decreased heart rate, vasoactive drugs with preferential venoconstrictive and chronotropic properties are preferred. For this reason ephedrine, 5 to 10 mg intravenously, often is the drug of choice. In addition to the use of ephedrine to treat deleterious effects of sympathetic blockade, direct-acting α1-adrenergic receptor agonists such as phenylephrine (see Chapter 10: Catecholamines, Sympathomimetic Drugs, and Adrenergic Receptor Antagonists) can be administered either by bolus or continuous infusion. A beneficial effect of spinal anesthesia partially mediated by the sympathetic nervous system is on
the intestine. Sympathetic fibers originating from T5 to L1 inhibit peristalsis; thus, their blockade produces a small, contracted intestine. This, together with a flaccid abdominal musculature, produces excellent operating conditions for some types of bowel surgery. The effects of spinal anesthesia on the respiratory system mostly are mediated by effects on the skeletal musculature. Paralysis of the intercostal muscles will reduce a patient's ability to cough and clear secretions, which may be undesirable in a bronchitic or emphysematous patient and may produce dyspnea. It should be noted that respiratory arrest during spinal anesthesia is seldom due to paralysis of the phrenic nerves or to toxic levels of local anesthetic in the CSF of the fourth ventricle. It is much more likely to be due to medullary ischemia secondary to hypotension. Pharmacology of Spinal Anesthesia Currently in the United States, the drugs most commonly used in spinal anesthesia are lidocaine, tetracaine, and bupivacaine. Procaine occasionally is used for diagnostic blocks when a short duration of action is desired. The choice of local anesthetic primarily is determined by the duration of anesthesia desired. General guidelines are to use lidocaine for short procedures, bupivacaine for intermediate to long procedures, and tetracaine for long procedures. As mentioned above, the factors contributing to the distribution of local anesthetics in the CSF have received much attention because of their importance in determining the height of block. The most important pharmacological factors include the amount and, possibly, the volume of drug injected and its baricity. The speed of injection of the local anesthetic solution also may affect the height of the block, just as the position of the patient (see below) can influence the rate of distribution of the anesthetic agent and the height of blockade achieved. 
For a given preparation of local anesthetic, administration of increasing amounts leads to a fairly predictable increase in the level of block attained. For example, 100 mg of lidocaine, 20 mg of bupivacaine, or 12 mg of tetracaine usually will result in a T4 sensory block. More complete tables of these relationships can be found in standard anesthesiology texts. Epinephrine often is added to spinal anesthetics to increase the duration or intensity of block. Epinephrine's effect on duration of block is dependent on the technique used to measure duration. A commonly used measure of block duration is the length of time it takes for the block to recede by two dermatomes from the maximum height of the block, while a second is the duration of block at some specified level, typically L1. In most studies, addition of 200 μg of epinephrine to tetracaine solutions prolongs the duration of block by both measures. However, addition of epinephrine to lidocaine or bupivacaine does not affect the first measure of duration but does prolong the block at lower levels. In different clinical situations, one or the other measure of anesthesia duration may be more relevant, and this must be kept in mind when deciding to add epinephrine to spinal local anesthetics. The mechanism of action of vasoconstrictors in prolonging spinal anesthesia is uncertain. It has been hypothesized that these agents decrease spinal cord blood flow, decreasing clearance of local anesthetic from the CSF, but this has not been convincingly demonstrated. Epinephrine and other α-adrenergic agonists have been shown to decrease nociceptive transmission in the spinal cord, and studies in genetically modified mice suggest that α2A-adrenergic receptors play a principal role in this response (Stone et al., 1997). It is possible that these actions contribute to the effects of epinephrine. 
Drug Baricity and Patient Position The baricity of the local anesthetic injected will determine the direction of migration within the dural sac. Hyperbaric solutions will tend to settle in the dependent portions of the sac, while hypobaric solutions will tend to migrate in the opposite direction. Isobaric solutions usually will stay in the vicinity where they were injected, diffusing slowly in all directions. Consideration of the patient position during and after the performance of the block and the choice of a local anesthetic of the appropriate baricity are crucial for a successful block during some surgical procedures. For
example, a saddle (perineal) block is best performed with a hyperbaric anesthetic in the sitting position, with the patient remaining in that position until the anesthetic level has become "fixed." On the other hand, for a saddle block in the prone, jackknife position, a hypobaric local anesthetic is appropriate. Lidocaine and bupivacaine are marketed in both isobaric and hyperbaric preparations and, if desired, can be diluted with sterile, preservative-free water to make them hypobaric. Complications of Spinal Anesthesia Persistent neurological deficits following spinal anesthesia are extremely rare. Thorough evaluation of a suspected deficit should be performed in collaboration with a neurologist. Neurological sequelae can be both immediate and late. Possible causes include introduction of foreign substances (such as disinfectants or talc) into the subarachnoid space, infection, hematoma, or direct mechanical trauma. Aside from drainage of an abscess or hematoma, treatment usually is ineffective; thus, avoidance and careful attention to detail while performing spinal anesthesia are necessary. High concentrations of local anesthetic can cause irreversible block. After administration, local anesthetic solutions are diluted rapidly, quickly reaching nontoxic concentrations. However, there are several reports of transient or longer-lasting neurological deficits following lidocaine spinal anesthesia, particularly with 5% lidocaine (i.e., 180 mM) in 7.5% glucose (Hodgson et al., 1999). Spinal anesthesia sometimes is regarded as contraindicated in patients with preexisting disease of the spinal cord. No experimental evidence exists to support this hypothesis. Nonetheless, it is prudent to avoid spinal anesthesia in patients with progressive diseases of the spinal cord. However, spinal anesthesia may be very useful in patients with fixed, chronic spinal cord injury. 
A more common sequela following any lumbar puncture, including spinal anesthesia, is a postural headache with classic features. The incidence of headache decreases with increasing age of the patient and decreasing needle diameter. Headache following lumbar puncture must be thoroughly evaluated to exclude serious complications such as meningitis. Treatment usually is conservative, with bed rest and analgesics. If this approach fails, an epidural blood patch can be performed; this procedure usually is successful in alleviating postdural puncture headaches, although a second blood patch may be necessary. If two epidural blood patches are ineffective in relieving the headache, the diagnosis of postdural puncture headache should be reconsidered. Intravenous caffeine (500 mg as the benzoate salt administered over 4 hours) also has been advocated for the treatment of postdural puncture headache. However, the efficacy of caffeine is less than that of a blood patch, and relief usually is transient. Evaluation of Spinal Anesthesia Spinal anesthesia is a safe and effective technique. Its value is greatest during surgery involving the lower abdomen, the lower extremities, and the perineum. It often is combined with intravenous medication to provide sedation and amnesia. The physiological perturbations associated with low spinal anesthesia often have less potential harm than those associated with general anesthesia. The same does not apply for high spinal anesthesia. The sympathetic blockade that accompanies levels of spinal anesthesia adequate for mid- or upper-abdominal surgery, coupled with the difficulty in achieving visceral analgesia, is such that equally satisfactory and safer operating conditions can be realized by combining the spinal anesthetic with a "light" general anesthetic or by the administration of a general anesthetic and a neuromuscular blocking agent. 
Epidural Anesthesia Epidural anesthesia is administered by injecting local anesthetic into the epidural space, the space
bounded by the ligamentum flavum posteriorly, the spinal periosteum laterally, and the dura anteriorly. Epidural anesthesia can be performed in the sacral hiatus (caudal anesthesia) or in the lumbar, thoracic, or cervical regions of the spine. Its current popularity arises from the development of catheters that can be placed into the epidural space, allowing either continuous infusions or repeated bolus administration of local anesthetics. The primary site of action of epidurally administered local anesthetics is on the spinal nerve roots. However, epidurally administered local anesthetics also may act on the spinal cord and on the paravertebral nerves. The selection of drugs available for epidural anesthesia is similar to that for major nerve blocks. As for spinal anesthesia, the choice of drugs to be used during epidural anesthesia is dictated primarily by the duration of anesthesia desired. However, when an epidural catheter is placed, short-acting drugs can be administered repeatedly, providing more control over the duration of block. Bupivacaine, 0.5% to 0.75%, is used when a long duration of surgical block is desired. Due to enhanced cardiotoxicity in pregnant patients, the 0.75% solution is not approved for obstetrical use. Lower concentrations (0.25%, 0.125%, or 0.0625%) of bupivacaine, often with 2 μg/ml of fentanyl added, frequently are used to provide analgesia during labor. They also are useful preparations for providing postoperative analgesia in certain clinical situations. Etidocaine, 1.0% or 1.5%, is useful for providing surgical anesthesia with excellent muscle relaxation of long duration. Lidocaine, 2%, is the most frequently used intermediate-acting epidural local anesthetic. Chloroprocaine, 2% or 3%, provides rapid onset and a very short duration of anesthetic action. 
However, its use in epidural anesthesia has been clouded by controversy regarding its potential ability to cause neurological complications if the drug is accidentally injected into the subarachnoid space (see above). The duration of action of epidurally administered local anesthetics is frequently prolonged, and systemic toxicity decreased, by addition of epinephrine. Addition of epinephrine also makes inadvertent intravascular injection easier to detect and modifies the effect of sympathetic blockade during epidural anesthesia. For each anesthetic agent, a relationship exists between the volume of local anesthetic injected epidurally and the segmental level of anesthesia achieved. For example, in 20- to 40-year-old, healthy, nonpregnant patients, each 1 to 1.5 ml of 2% lidocaine will give an additional segment of anesthesia. The amount needed will decrease with increasing age and also will be decreased during pregnancy and in children. The concentration of local anesthetic used determines the type of nerve fibers blocked. The highest concentrations are used when sympathetic, somatic sensory, and somatic motor blockade are required. Intermediate concentrations allow somatic sensory anesthesia without muscle relaxation. Low concentrations will block only preganglionic sympathetic fibers. As an example, with bupivacaine these effects might be achieved with concentrations of 0.5%, 0.25%, and 0.0625%, respectively. The total amounts of drug that can be injected with safety at one time are approximately those mentioned above under "Nerve Block Anesthesia" and "Infiltration Anesthesia." Performance of epidural anesthesia requires a greater degree of skill than does spinal anesthesia. The technique of epidural anesthesia and the volumes, concentrations, and types of drugs used are described in detail in standard anesthesiology texts (e.g., see Cousins and Bridenbaugh, 1998). 
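The volume-to-segment rule of thumb above (each 1 to 1.5 ml of 2% lidocaine gives one additional segment of anesthesia in a healthy, nonpregnant 20- to 40-year-old) can be sketched as a small calculation. The function name and the use of the 1.25-ml midpoint are illustrative assumptions, not a clinical formula; requirements fall with increasing age, during pregnancy, and in children:

```python
# Back-of-the-envelope sketch of the epidural volume rule of thumb quoted in
# the text. Illustrative only; not a substitute for clinical judgment or
# the dosing tables in standard anesthesiology texts.
def epidural_lidocaine_volume_ml(segments, ml_per_segment=1.25):
    """Estimated volume of 2% lidocaine for a desired number of segments."""
    return segments * ml_per_segment

# Blocking 10 segments would need roughly 10 to 15 ml:
print(epidural_lidocaine_volume_ml(10, 1.0), epidural_lidocaine_volume_ml(10, 1.5))
# 10.0 15.0
```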
A significant difference between epidural and spinal anesthesia is that the dose of local anesthetic used can produce high concentrations in blood following absorption from the epidural space. Peak concentrations of lidocaine in blood following injection of 400 mg (without epinephrine) into the lumbar epidural space average 3 to 4 μg/ml; peak concentrations of bupivacaine in blood average 1.0 μg/ml after the lumbar epidural injection of 150 mg. Addition of epinephrine (5 μg/ml) decreases peak plasma concentrations by about 25%. Peak blood concentrations are a function of
the total dose of drug administered rather than the concentration or volume of solution following epidural injection (Covino and Vassallo, 1976). The risk of inadvertent intravascular injection is increased in epidural anesthesia, as the epidural space contains a rich venous plexus. Another major difference between epidural and spinal anesthesia is that there is no zone of differential sympathetic blockade with epidural anesthesia; thus, the level of sympathetic block is close to the level of sensory block. Because epidural anesthesia does not result in the zone of differential sympathetic blockade that is observed during spinal anesthesia, cardiovascular responses to epidural anesthesia might be expected to be less prominent. In practice, however, this is not the case; this potential advantage of epidural anesthesia is offset by the cardiovascular responses to the high concentration of anesthetic in blood that occurs during epidural anesthesia. This is most apparent when, as is often the case, epinephrine is added to the epidural injection. The resulting concentration of epinephrine in blood is sufficient to produce significant β2-adrenergic receptor-mediated vasodilation. As a consequence, blood pressure decreases, even though cardiac output increases owing to the positive inotropic and chronotropic effects of epinephrine (see Chapter 10: Catecholamines, Sympathomimetic Drugs, and Adrenergic Receptor Antagonists). The result is peripheral hyperperfusion and hypotension. Differences in cardiovascular responses to equal levels of spinal and epidural anesthesia also are observed when a local anesthetic such as lidocaine is used without epinephrine. This may be a consequence of the direct effects of high concentrations of lidocaine on vascular smooth muscle and the heart. 
The magnitude of the differences in responses to equal sensory levels of spinal and epidural anesthesia varies, however, with the local anesthetic used for the epidural injection (assuming no epinephrine is used). For example, local anesthetics such as bupivacaine, which are highly lipid soluble, are distributed less into the circulation than are less lipid-soluble agents such as lidocaine. High concentrations of local anesthetics in blood during epidural anesthesia are of special importance when this technique is used to control pain during labor and delivery. Local anesthetics cross the placenta, enter the fetal circulation, and at high concentrations may cause depression of the neonate (Scanlon et al., 1974). The extent to which they do so is determined by dosage, acid-base status, the level of protein binding in both maternal and fetal blood (Tucker et al., 1970), placental blood flow, and solubility of the agent in fetal tissue. These concerns have been lessened by the trend toward using more dilute solutions of bupivacaine for labor analgesia. Epidural and Intrathecal Opiate Analgesia Small quantities of opioid injected intrathecally or epidurally produce segmental analgesia (Yaksh and Rudy, 1976). This observation led to the clinical use of spinal and epidural opioids during surgical procedures and for the relief of postoperative and chronic pain (Cousins and Mather, 1984). As with local anesthesia, analgesia is confined to sensory nerves that enter the spinal cord dorsal horn in the vicinity of the injection. Presynaptic opioid receptors inhibit the release of substance P and other neurotransmitters from primary afferents, while postsynaptic opioid receptors decrease the activity of certain dorsal horn neurons in the spinothalamic tracts (Willcockson et al., 1986; see also Chapters 6 and 23). 
Since conduction in autonomic, sensory, and motor nerves is not affected by the opioids, blood pressure, motor function, and nonnociceptive sensory perception typically are not influenced by spinal opioids. The volume-evoked micturition reflex is inhibited, implicating opioid receptors in this reflex pathway. Clinically, this is manifest by urinary retention. Other side effects include pruritus and nausea and vomiting in susceptible individuals. Delayed respiratory depression and sedation, presumably from cephalad spread of opioid within the CSF, occur infrequently with the doses of opioids currently used. Spinally administered opioids by themselves do not provide satisfactory anesthesia for surgical
procedures. Thus, opioids have found the greatest use in the treatment of postoperative and chronic pain. In selected patients, spinal or epidural opioids can provide excellent analgesia following thoracic, abdominal, pelvic, or lower extremity surgery without the side effects associated with high doses of systemically administered opioids. For postoperative analgesia, spinally administered morphine, 0.2 to 0.5 mg, usually will provide 8 to 16 hours of analgesia. Placement of an epidural catheter and repeated boluses or an infusion of opioid permits an increased duration of analgesia. Many opioids have been used epidurally. Morphine, 2 to 6 mg, every 6 hours, commonly is used for bolus injections, while fentanyl, 20 to 50 μg/hour, often combined with bupivacaine, 5 to 20 mg/hour, is used for infusions. For cancer pain, repeated doses of epidural opioids can provide analgesia of several months' duration. The dose of epidural morphine, for example, is far less than the dose of systemically administered morphine that would be required to provide similar analgesia. This reduces the complications that usually accompany the administration of high doses of systemic opioids, particularly sedation and constipation. Unfortunately, as with systemic opioids, tolerance will develop to the analgesic effects of epidural opioids, but this usually can be managed by increasing the dose.

Chapter 16. Therapeutic Gases: Oxygen, Carbon Dioxide, Nitric Oxide, and Helium
Oxygen Oxygen is a fundamental requirement for animal existence. Hypoxia is a life-threatening condition in which oxygen delivery is inadequate to meet the metabolic demands of the tissues. Since oxygen delivery is the product of blood flow and oxygen content, hypoxia may result from alterations in tissue perfusion, decreased oxygen tension in the blood, or decreased oxygen carrying capacity. In addition, hypoxia may result from a problem in oxygen transport from the microvasculature to the cells or in utilization within the cell. Irrespective of cause, an inadequate supply of oxygen ultimately results in the cessation of aerobic metabolism and oxidative phosphorylation, depletion of high-energy compounds, cellular dysfunction, and death. History Soon after Priestley's discovery of oxygen in 1772 and Lavoisier's elucidation of its role in respiration, oxygen therapy was introduced in England by Beddoes. His publication in 1794, entitled "Considerations on the Medicinal Use and Production of Factitious Airs," can be considered the beginning of inhalational therapy. Beddoes, overcome with enthusiasm for his project, treated all kinds of diseases with oxygen, including such diverse conditions as scrofula, leprosy, and paralysis. Such indiscriminate therapeutic applications naturally led to many failures, and Beddoes died a disconsolate man. Beddoes' assistant Sir Humphry Davy went on to make significant contributions to our knowledge of the anesthetic gas nitrous oxide. It was as a result of pioneer investigations such as those of Haldane, Hill, Barcroft, Krogh, L. J. Henderson, and Y. Henderson that oxygen therapy achieved a sound physiological basis (Sackner, 1974). Although Paul Bert had studied therapeutic aspects of hyperbaric oxygen in 1870 and identified oxygen toxicity (Bert, 1873), the use of oxygen at pressures above 1 atmosphere for therapeutic purposes did not begin until the 1950s (Lambertsen et al., 1953; Boerema et al., 1960).

Normal Oxygenation Oxygen makes up 21% of air, which at sea level (1 atmosphere, 101 kPa) represents a partial pressure of 21 kPa (158 mmHg). While the fraction (percentage) of oxygen remains constant regardless of atmospheric pressure and altitude, the partial pressure of oxygen (PO2) decreases with lower atmospheric pressure. Since it is this partial pressure that drives the diffusion of oxygen, ascent to elevated altitude reduces the uptake and delivery of oxygen to the tissues. Conversely, increases in atmospheric pressure (hyperbaric therapy, or breathing at depth) increase the PO2 in inspired air and result in increased gas uptake. As the air is delivered to the distal airways and alveoli, the PO2 decreases by dilution with carbon dioxide and water vapor and by uptake into the blood. Under ideal conditions, when ventilation and perfusion are uniformly distributed, the alveolar PO2 will be approximately 14.6 kPa (110 mmHg). The corresponding alveolar partial pressures of water and carbon dioxide are 6.2 kPa (47 mmHg) and 5.3 kPa (40 mmHg), respectively. Under normal conditions, there is complete equilibration of alveolar gas and capillary blood, and the PO2 in end capillary blood is typically within a fraction of a kPa of that in the alveoli. Under conditions of disease, when the diffusion barrier for gas transport may be increased, or exercise, when high cardiac output reduces capillary transit time, full equilibration may not occur, and the alveolar-to-end-capillary PO2 gradient may be increased. The PO2 in arterial blood, however, is further reduced by venous admixture (shunt), the addition of mixed venous blood, which has a PO2 of approximately 5.3 kPa (40 mmHg). 
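The opening point, that the oxygen fraction stays constant while its partial pressure tracks barometric pressure, can be sketched numerically. The barometric pressure used for altitude (about 50 kPa near 5500 m) is an illustrative assumption:

```python
# Partial pressure of inspired oxygen: the fraction (0.21 in air) is fixed,
# so inspired PO2 scales directly with barometric pressure.
def inspired_po2_kpa(fio2, barometric_kpa):
    return fio2 * barometric_kpa

print(round(inspired_po2_kpa(0.21, 101), 1))  # sea level: 21.2 kPa
print(round(inspired_po2_kpa(0.21, 50), 1))   # high altitude (assumed ~50 kPa): 10.5 kPa
print(round(inspired_po2_kpa(0.21, 304), 1))  # 3 atmospheres hyperbaric: 63.8 kPa
```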
Together, the diffusional barrier, inhomogeneities of ventilation and perfusion, and the shunt fraction are the major causes of the alveolar-to-arterial oxygen gradient, which is normally 1.3 to 1.6 kPa (10 to 12 mmHg) when air is breathed and 4.0 to 6.6 kPa (30 to 50 mmHg) when 100% oxygen is breathed (Clark and Lambertsen, 1971). Oxygen is delivered to the tissue capillary beds by the circulation and again follows a gradient out of the blood and into cells. Tissue extraction of oxygen typically reduces the PO2 of venous blood by an additional 7.3 kPa (55 mmHg). The mean tissue PO2 is much lower than the value in the mixed venous blood because of substantial diffusional barriers and the consumption of oxygen in the tissues. Although the PO2 at the site of oxygen utilization, the mitochondria, is not known, oxidative phosphorylation can continue at a PO2 of only a few mm Hg (Robiolio et al., 1989). In the blood, oxygen is carried primarily in chemical combination with hemoglobin and to a small extent dissolved in solution. The quantity of oxygen combined with hemoglobin depends on the PO2, as illustrated by the sigmoid-shaped oxyhemoglobin dissociation curve (Figure 16-1). Hemoglobin is about 98% saturated with oxygen when air is breathed under normal circumstances, and it binds 1.3 ml of oxygen per gram when fully saturated. The steep slope of this curve with falling PO2 facilitates unloading of oxygen from hemoglobin at the tissue level and reloading when desaturated, mixed venous blood arrives at the lung. Shifting of the curve to the right with increasing temperature, increasing PCO2, and decreasing pH, as is found in metabolically active tissues, lowers the oxygen saturation for the same PO2 and thus delivers additional oxygen where and when it is most needed. However, the flattening of the curve with higher PO2 indicates that increasing blood PO2 by inspiring oxygen-enriched mixtures can only minimally increase the amount of oxygen carried by hemoglobin. 
Further increases in blood oxygen content can occur only by increasing the amount of oxygen dissolved in plasma. Because of the low solubility of oxygen (0.226 ml/liter per kPa or 0.03 ml/liter per mm Hg at 37°C), breathing 100% oxygen can increase the amount of oxygen in blood by only 15 ml per liter, less than one third of normal metabolic demands. However, if the inspired PO2 is increased to 3 atmospheres (304 kPa) in a hyperbaric chamber, the amount of dissolved oxygen is sufficient to meet normal metabolic demands even in
the absence of hemoglobin (Table 16-1). Figure 16-1. Oxyhemoglobin Dissociation Curve for Whole Blood. The relationship between PO2 and hemoglobin (Hb) saturation is shown. The P50, or the PO2 resulting in 50% saturation, is indicated as well. An increase in temperature or a decrease in pH (as in working muscle) shifts this relationship to the right, reducing the hemoglobin saturation at the same PO2 and thus aiding in the delivery of oxygen to the tissues.
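The arithmetic behind the dissolved-oxygen figures above can be checked directly from the chapter's constants (1.3 ml O2 per g of saturated hemoglobin; solubility 0.226 ml/liter per kPa). The hemoglobin concentration (150 g/liter) and the arterial PO2 assumed while breathing 100% oxygen (about 80 kPa) are illustrative assumptions:

```python
# Sketch of whole-blood O2 content using the chapter's constants.
# Hb-bound: 1.3 ml O2 per g Hb when saturated. Dissolved: 0.226 ml/L per kPa.
def blood_o2_content_ml_per_l(hb_g_per_l, sat, po2_kpa):
    """Total O2 content per liter of blood = hemoglobin-bound + dissolved."""
    bound = 1.3 * hb_g_per_l * sat   # ml O2 bound to hemoglobin
    dissolved = 0.226 * po2_kpa      # ml O2 in physical solution
    return bound + dissolved

air = blood_o2_content_ml_per_l(150, 0.98, 13.3)     # breathing air
pure_o2 = blood_o2_content_ml_per_l(150, 1.00, 80)   # breathing 100% O2 (assumed PaO2)
dissolved_gain = 0.226 * (80 - 13.3)
print(round(dissolved_gain, 1))  # ~15 ml/L, the figure quoted in the text
```

The dissolved gain of roughly 15 ml per liter is small against a normal arteriovenous extraction of about 50 ml per liter, which is why only hyperbaric pressures make dissolved oxygen alone sufficient.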

Oxygen Deprivation An understanding of the causes and effects of oxygen deficiency is necessary for the rational therapeutic use of the gas. Hypoxia is the term used to denote insufficient oxygenation of the tissues. Hypoxemia generally implies a failure of the respiratory system to oxygenate arterial blood. Pulmonary Mechanisms of Hypoxemia Classically there are five causes of hypoxemia: low inspired oxygen fraction (FIO2), increased diffusion barrier, hypoventilation, ventilation/perfusion (V̇/Q̇) mismatch, and shunt or venous admixture. Low FIO2 is a cause of hypoxemia only at high altitude or in the event of equipment failure, such as a gas blender malfunction or a mislabeled compressed-gas tank. An increase in the barrier to diffusion of oxygen within the lung is rarely a cause of hypoxemia in a resting subject, except in end-stage parenchymal lung disease. Both of these problems may be alleviated with administration of supplemental oxygen, the former by definition and the latter by increasing the gradient driving diffusion. Hypoventilation causes hypoxemia by reducing the alveolar PO2 in proportion to the build-up of CO2 in the alveoli. In essence, during hypoventilation there is decreased delivery of oxygen to the alveoli while its removal by the blood remains the same, causing its alveolar concentration to fall. The opposite occurs with carbon dioxide. This is described by the alveolar gas equation: PAO2 = PIO2 - (PACO2/R), where PAO2 and PACO2 are the alveolar partial pressures of O2 and CO2, PIO2 the partial pressure of O2 in the inspired gas, and R the respiratory quotient. Under normal conditions,
breathing room air at sea level (corrected for the partial pressure of water vapor), the PIO2 is about 20 kPa (150 mmHg), the PACO2 about 5.3 kPa (40 mmHg), R is 0.8, and thus the PAO2 is normally around 13.3 kPa (100 mmHg). It would require substantial hypoventilation, with the PACO2 rising to over 9.3 kPa (70 mmHg), to cause the PAO2 to fall below 7.8 kPa (60 mmHg). This cause of hypoxemia is readily prevented by administration of even small amounts of supplemental oxygen. Shunt and V̇/Q̇ mismatch are related causes of hypoxemia, but with an important distinction in their responses to supplemental oxygen. Optimal gas exchange occurs when blood flow (Q̇) and ventilation (V̇) are quantitatively matched. However, regional variations in V̇/Q̇ matching typically exist within the lung, particularly in the presence of lung disease. As ventilation increases relative to blood flow, the alveolar PO2 (PAO2) increases; but because of the flat shape of the oxyhemoglobin dissociation curve at high PO2 (Figure 16-1), this increased PAO2 does not contribute much to the oxygen content of the blood. In addition, high V̇/Q̇ ratio lung regions have a relatively reduced blood flow, eventually becoming pure dead space regions at the extreme, contributing nothing to the oxygenation of the blood while decreasing the efficiency of CO2 removal. Conversely, as the V̇/Q̇ ratio falls and perfusion increases relative to ventilation, the PAO2 of the blood leaving these regions falls relative to regions with better matched ventilation and perfusion. Since the oxyhemoglobin dissociation curve is steep at these lower PO2 values, the oxygen saturation and content of the pulmonary venous blood falls significantly. At the extreme of low V̇/Q̇ ratios, there is no ventilation to a perfused region and pure shunt results, and the blood leaving the region has the same low PO2 and high PCO2 as mixed venous blood. 
The deleterious effect of V̇/Q̇ mismatch on arterial oxygenation is thus a direct result of the asymmetry of the oxyhemoglobin dissociation curve. Adding supplemental oxygen will generally make up for the fall in PAO2 in low V̇/Q̇ units and thus improve arterial oxygenation. However, since there is no ventilation to units with pure shunt, supplemental oxygen will not be effective in reversing the hypoxemia from this cause. Because of the steep oxyhemoglobin dissociation curve at low PO2, even moderate amounts of pure shunt will cause significant hypoxemia despite oxygen therapy (Figure 16-2). For the same reason, factors that decrease mixed venous PO2, such as decreased cardiac output or increased oxygen consumption, enhance the effects of V̇/Q̇ mismatch and shunt in causing hypoxemia.
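The alveolar gas equation and the hypoventilation example above can be checked with a few lines, using the values quoted in the text. This is a sketch of the simplified equation only, not a clinical tool:

```python
# Simplified alveolar gas equation from the text: PAO2 = PIO2 - PACO2/R,
# with the respiratory quotient R taken as 0.8.
def alveolar_po2_kpa(pio2_kpa, paco2_kpa, r=0.8):
    return pio2_kpa - paco2_kpa / r

# Room air at sea level (water-vapor corrected): PIO2 ~20 kPa, PACO2 ~5.3 kPa
print(round(alveolar_po2_kpa(20, 5.3), 1))  # 13.4 kPa, i.e. about 100 mmHg
# Marked hypoventilation (PACO2 of 9.3 kPa) pulls the PAO2 down sharply
print(round(alveolar_po2_kpa(20, 9.3), 1))  # 8.4 kPa
```

Raising PIO2 by even a modest amount of supplemental oxygen shifts the whole result upward, which is why hypoventilation-induced hypoxemia responds so readily to oxygen therapy.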

Figure 16-2. Effect of Shunt on Arterial Oxygenation. The iso-shunt diagram shows the effect of changing inspired oxygen concentration on arterial oxygenation in the presence of different amounts of pure shunt. As shunt fraction increases, even an inspired oxygen fraction (FIO2) of 1.0 is ineffective at increasing the arterial PO2. This estimation assumes hemoglobin (Hb) from 10 to 14 g/dl, arterial PCO2 of 3.3 to 5.3 kPa (25 to 40 mmHg), and an arterial-venous (a-v) O2 content difference of 5 ml/100 ml. Redrawn from Benatar et al., 1973, with permission.

Nonpulmonary Causes of Hypoxia In addition to failure of the respiratory system to adequately oxygenate the blood, there are a number of other factors that can contribute to hypoxia at the tissue level. These may be divided into categories of oxygen delivery and oxygen utilization. Oxygen delivery decreases globally when cardiac output falls or locally when regional blood flow is compromised, as from a vascular occlusion (stenosis, thrombosis, microvascular occlusion) or increased downstream pressure (compartment syndrome, venous stasis, or venous hypertension). Decreased oxygen-carrying capacity of the blood will likewise decrease oxygen delivery, as occurs with anemia, carbon monoxide poisoning, or hemoglobinopathy. Finally, hypoxia may occur when transport of oxygen from the capillaries to the tissues is decreased (edema) or utilization of the oxygen by the cells impaired (cyanide toxicity). Multiple causes of hypoxia often coexist. A victim of smoke inhalation may have an airway obstruction as a result of thermal injury and reduced oxygen carrying capacity because of carbon monoxide poisoning and anemia. An organ with a marginal blood supply because of atherosclerosis may be seriously damaged if the PO2 of its arterial supply is decreased only slightly. Effects of Hypoxia Regardless of the cause, hypoxia ultimately results in the cessation of aerobic metabolism, exhaustion of high-energy intracellular stores, cellular dysfunction, and death. The time course of cellular demise depends upon the tissue's relative metabolic requirements, oxygen and energy stores, and anaerobic capacity. Survival times (the time from the onset of circulatory arrest to significant organ dysfunction) range from 1 minute in the cerebral cortex to around 5 minutes in the heart and 10 minutes in the kidneys and liver, with the potential for some degree of recovery if reperfused. 
Revival times (the duration of hypoxia beyond which recovery is no longer possible) are approximately 4 to 5 times longer. Less severe degrees of hypoxia have progressive physiological
effects on different organ systems (Nunn, 1993b). Respiratory System Hypoxia stimulates the carotid and aortic chemoreceptors to cause increases in both the rate and depth of ventilation. Minute volume almost doubles when normal individuals inspire gas with a PO2 of 6.6 kPa (50 mm Hg). Dyspnea is not always experienced with simple hypoxia but occurs when the respiratory minute volume approaches half the maximal breathing capacity; this may occur with minimum exertion in patients in whom maximal breathing capacity is reduced by lung disease. In general, little warning precedes the loss of consciousness resulting from hypoxia. Cardiovascular System Hypoxia causes reflex activation of the sympathetic nervous system via both autonomic and humoral mechanisms, resulting in tachycardia and increased cardiac output. Peripheral vascular resistance, however, decreases primarily via local autoregulatory mechanisms, with the net result that blood pressure is generally maintained unless hypoxia is prolonged or severe. In contrast to the systemic circulation, hypoxia causes pulmonary vasoconstriction and hypertension, an extension of the normal regional vascular response that matches perfusion with ventilation to optimize gas exchange in the lung (hypoxic pulmonary vasoconstriction). Central Nervous System (CNS) The CNS is least able to tolerate hypoxia. Hypoxia is manifest initially by decreased intellectual capacity and impaired judgment and psychomotor ability. This state progresses to confusion and restlessness and ultimately to stupor, coma, and death as the arterial PO2 decreases below 4 to 5.3 kPa (30 to 40 mm Hg). Victims often are unaware of this progression. Cellular and Metabolic Effects When the mitochondrial PO2 falls below about 0.13 kPa (1 mm Hg), aerobic metabolism stops, and the less efficient anaerobic pathways of glycolysis become responsible for the production of cellular energy. 
End-products of anaerobic metabolism, such as lactic acid, may be released into the circulation in measurable quantities. Energy-dependent ion pumps slow, and transmembrane ion gradients dissipate. Intracellular concentrations of Na+, Ca2+, and H+ increase, finally leading to cell death. The time course of cellular demise depends on the relative metabolic demands, O2 storage capacity, and anaerobic capacity of the individual organs. Restoration of perfusion and oxygenation prior to hypoxic cell death paradoxically can result in an accelerated form of cell injury (ischemia-reperfusion syndrome), thought to result from the generation of highly reactive oxygen free radicals (McCord, 1985). Adaptation to Hypoxia Long-term hypoxia results in adaptive physiological changes; these have been studied most thoroughly in persons exposed to high altitude. Adaptations include increased numbers of pulmonary alveoli, increased concentrations of hemoglobin in blood and myoglobin in muscle, and a decreased ventilatory response to hypoxia (Cruz et al., 1980). Short-term exposure to altitude produces similar adaptive changes. In susceptible individuals, however, acute exposure to high altitude may produce "acute mountain sickness," a syndrome characterized by headache, nausea, dyspnea, sleep disturbances, and impaired judgment, progressing to pulmonary and cerebral edema (Johnson and Rock, 1988). Mountain sickness is treated with supplemental oxygen, descent to
lower altitude, or an increase in ambient pressure. Diuretics (carbonic anhydrase inhibitors) and steroids also may be helpful. The syndrome usually can be avoided by a slow ascent to altitude, permitting time for adaptation to occur. It has been noted that certain aspects of fetal and newborn physiology are strongly reminiscent of adaptation mechanisms found in hypoxia-tolerant animals (Mortola, 1999; Singer, 1999), including shifts in the oxyhemoglobin dissociation curve (fetal hemoglobin), reduction in metabolic rate and body temperature (hibernation-like mode), reduction in heart rate and circulatory redistribution (as in diving mammals), and redirection of energy utilization from growth to maintenance metabolism. These adaptations help account for the relative tolerance of the fetus and neonate to both chronic (uterine insufficiency) and short-term hypoxia. Oxygen Inhalation Physiological Effects of Oxygen Inhalation The primary use for inhalation of oxygen is to reverse or prevent the development of hypoxia; other consequences usually are minor. However, when oxygen is breathed in excessive amounts or for prolonged periods, secondary physiological changes and toxic effects can occur. Respiratory System Inhalation of oxygen at 1 atmosphere or above causes a small degree of respiratory depression in normal subjects, presumably as a result of loss of tonic chemoreceptor activity. However, ventilation typically increases within a few minutes of oxygen inhalation because of a paradoxical increase in the tension of carbon dioxide in tissues. This increase results from the increased concentration of oxyhemoglobin in venous blood, which causes less efficient removal of carbon dioxide from the tissues (Lambertsen et al., 1953; Plewes and Farhi, 1983). 
In a small number of patients whose respiratory center is depressed by long-term retention of carbon dioxide, injury, or drugs, ventilation is maintained largely by stimulation of carotid and aortic chemoreceptors, commonly referred to as the hypoxic drive. The provision of too much oxygen can depress this drive, resulting in respiratory acidosis. In these cases, supplemental oxygen should be titrated carefully to ensure adequate arterial saturation. If hypoventilation results, then mechanical ventilatory support with or without tracheal intubation should be provided. Expansion of poorly ventilated alveoli is maintained in part by the nitrogen content of alveolar gas; nitrogen is poorly soluble and thus remains in the airspaces while oxygen is absorbed. High oxygen concentrations delivered to poorly ventilated lung regions can promote absorption atelectasis, occasionally resulting in an increase in shunt and a paradoxical worsening of hypoxemia after a period of oxygen administration. Cardiovascular System Aside from reversing the effects of hypoxia, the physiological consequences of oxygen inhalation on the cardiovascular system are of little significance. Heart rate and cardiac output are slightly reduced when 100% oxygen is breathed; blood pressure changes little. While pulmonary arterial pressure changes little in normal subjects with oxygen inhalation, elevated pulmonary artery pressures in patients living at altitude who have chronic hypoxic pulmonary hypertension may reverse with oxygen therapy or return to sea level (Grover et al., 1966; Spievogel et al., 1969). In particular, in neonates with congenital heart disease and left-to-right shunting of cardiac output,
oxygen supplementation must be carefully regulated because of the risk of further reducing pulmonary vascular resistance and increasing pulmonary blood flow. Metabolism Inhalation of 100% oxygen does not produce detectable changes in oxygen consumption, carbon dioxide production, respiratory quotient, or glucose utilization. Oxygen Administration Oxygen is supplied as a compressed gas in steel cylinders, and a purity of 99% is referred to as "medical grade." Most hospitals have oxygen piped from insulated liquid oxygen containers to areas of frequent use. For safety, oxygen cylinders and piping are color-coded (green in the United States), and some form of mechanical indexing of valve connections is used to prevent the connection of other gases to oxygen systems. Oxygen concentrators, which employ molecular sieve, membrane, or electrochemical technologies, are available for low-flow home use. Such systems produce 30% to 95% oxygen, depending on the flow rate. These devices reduce the resupply problem inherent in the use of compressed or liquid gas systems. Oxygen is delivered by inhalation except during extracorporeal circulation, when it is dissolved directly into the circulating blood. Only a closed delivery system, with an airtight seal to the patient's airway and complete separation of inspired from expired gases, can precisely control FIO2 . In all other systems, the actual delivered FIO2 will depend upon the ventilatory pattern (rate, tidal volume, inspiratory:expiratory time ratio, and inspiratory flow) and delivery system characteristics. Low-Flow Systems Low-flow systems, in which the oxygen flow is lower than the inspiratory flow rate, have a limited ability to raise the FIO2 because they depend upon entrained room air to make up the balance of the inspired gas. The FIO2 of these systems is extremely sensitive to small changes in the ventilatory pattern. 
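The sensitivity of low-flow devices to the ventilatory pattern can be illustrated with a simple mixing calculation. This is a sketch under the simplifying assumption that the entire balance of the patient's inspiratory flow is entrained room air (it ignores reservoir effects such as the nasopharynx); the flow values chosen are illustrative, not from the text:

```python
def delivered_fio2(o2_flow_lpm, inspiratory_flow_lpm, room_air_fio2=0.21):
    """Estimate the delivered FIO2 of a low-flow device, assuming the
    balance of the patient's inspiratory flow is entrained room air."""
    entrained = inspiratory_flow_lpm - o2_flow_lpm
    return (o2_flow_lpm + room_air_fio2 * entrained) / inspiratory_flow_lpm

# 2 liters/minute of oxygen at a resting inspiratory flow of 30 liters/minute
print(round(delivered_fio2(2, 30), 3))  # 0.263

# The same 2 liters/minute during respiratory distress (inspiratory flow 60 liters/minute)
print(round(delivered_fio2(2, 60), 3))  # 0.236 -- FIO2 falls as inspiratory flow rises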
Devices such as face tents are used primarily for delivering humidified gases to patients and cannot be relied upon to provide predictable amounts of supplemental oxygen. Nasal cannulae (small, flexible prongs that sit just inside each naris) deliver oxygen at 1 to 6 liters/minute. The nasopharynx acts as a reservoir for storing the oxygen, and patients may breathe through either the mouth or nose as long as the nasal passages remain patent. These devices typically deliver 24% to 28% FIO2 at 2 to 3 liters/minute. Up to 40% FIO2 is possible at higher flow rates, although this is poorly tolerated for more than brief periods because of mucosal drying. The simple facemask, a clear plastic mask with side holes for clearance of expiratory gas and for inspiratory air entrainment, is used when higher oxygen concentrations are desired without tight control. The maximum FIO2 of a facemask can be increased from around 60% at 6 to 15 liters/minute to greater than 85% by adding a 600- to 1000-ml reservoir bag. With this partial rebreathing mask, most of the inspired volume is drawn from the reservoir, avoiding dilution of the FIO2 by entrainment of room air. High-Flow Systems The most commonly used high-flow oxygen delivery device is the Venturi mask, which utilizes a specially designed mask insert to entrain room air reliably in a fixed ratio and thus provides a relatively constant FIO2 at relatively high flow rates. Typically, each insert is designed to operate at a specific oxygen flow rate, and different inserts are required to change the FIO2. Lower delivered FIO2 values use greater entrainment ratios, resulting in higher total (oxygen plus entrained air) flows
to the patient, ranging from 80 liters/minute for 24% FIO2 to 40 liters/minute at 50% FIO2 . While these flow rates are much higher than those obtained with low-flow devices, they still may be lower than the peak inspiratory flows for patients in respiratory distress, and thus the actual delivered oxygen concentration may be lower than the nominal value. Oxygen nebulizers, another type of Venturi device, provide patients with humidified oxygen at 35% to 100% FIO2 at high flow rates. Finally, oxygen blenders provide high inspired oxygen concentrations at very high flow rates. These devices mix high-pressure, compressed air and oxygen to achieve any concentration of oxygen from 21% to 100% at flow rates up to 100 liters/minute. These same blenders are used to provide control of FIO2 for ventilators, CPAP/BiPAP machines, oxygenators, and other devices with similar requirements. Again, despite the high flows, the delivery of high FIO2 to an individual patient also depends on maintaining a tight-fitting seal to the airway and/or the use of reservoirs to minimize entrainment of diluting room air. Monitoring of Oxygenation Monitoring and titration are required to meet the therapeutic goal of oxygen therapy and to avoid complications and side effects. Although cyanosis is a physical finding of substantial clinical importance, it is not an early, sensitive, or reliable index of oxygenation. Cyanosis appears when about 5 g/dl of deoxyhemoglobin is present in arterial blood (Lundsgaard and Van Slyke, 1923), representing an oxygen saturation of about 67% when a normal amount of hemoglobin (15 g/dl) is present. However, when anemia lowers the hemoglobin to 10 g/dl, then cyanosis does not appear until the arterial blood saturation has decreased to 50%. Invasive approaches for monitoring oxygenation include intermittent laboratory analysis of arterial or mixed venous blood gases and placement of intravascular cannulae capable of continuous measurement of oxygen tension. 
The latter method, which relies on fiberoptic oximetry, is used frequently for the continuous measurement of mixed venous hemoglobin saturation as an index of tissue extraction of oxygen, usually in critically ill patients. Noninvasive monitoring of arterial oxygen saturation now is widely available from transcutaneous pulse oximetry, in which oxygen saturation is measured from the differential absorption of light by oxyhemoglobin and deoxyhemoglobin and the arterial saturation determined from the pulsatile component of this signal. Application is simple and calibration not required. Because pulse oximetry measures hemoglobin saturation and not PO2, it is not sensitive to increases in PO2 that exceed levels required to fully saturate the blood. However, pulse oximetry is very useful for monitoring the adequacy of oxygenation during procedures requiring sedation or anesthesia, rapid evaluation and monitoring of potentially compromised patients, and titrating oxygen therapy in situations where toxicity from oxygen or side effects of excess oxygen are of concern. Complications of Oxygen Therapy Administration of supplemental oxygen is not without potential complications. In addition to the potential to promote absorption atelectasis and depress ventilation, discussed above, high flows of dry oxygen can dry out and irritate mucosal surfaces of the airway and the eyes, as well as decrease mucociliary transport and clearance of secretions. Humidified oxygen thus should be used when therapy of greater than an hour's duration is required. Finally, any oxygen-enriched atmosphere constitutes a fire hazard, and appropriate precautions must be taken both in the operating room and for patients on oxygen at home. It is important to realize that hypoxemia still can occur despite the administration of supplemental oxygen. 
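The cyanosis thresholds quoted above follow from simple arithmetic: cyanosis appears at roughly 5 g/dl of deoxyhemoglobin regardless of total hemoglobin, so the saturation at which it becomes visible depends on the hemoglobin content. A quick check of the figures cited from Lundsgaard and Van Slyke:

```python
def saturation_at_cyanosis(total_hb_g_dl, deoxy_threshold_g_dl=5.0):
    """Arterial O2 saturation at which ~5 g/dl of deoxyhemoglobin is present."""
    return 1.0 - deoxy_threshold_g_dl / total_hb_g_dl

print(round(saturation_at_cyanosis(15), 2))  # 0.67 -- normal hemoglobin (15 g/dl)
print(round(saturation_at_cyanosis(10), 2))  # 0.5  -- anemia (10 g/dl)
```

The same arithmetic implies the converse: polycythemic patients may appear cyanotic at relatively high arterial saturations.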
Furthermore, when supplemental oxygen is administered, desaturation occurs at a later time after airway obstruction or hypoventilation, potentially delaying the detection of these critical
events. Therefore, whether or not oxygen is administered to a patient at risk for these problems, it is essential that both oxygen saturation and adequacy of ventilation be frequently assessed. Therapeutic Uses of Oxygen Correction of Hypoxia As stated above, the primary therapeutic use of oxygen is to correct hypoxia. However, hypoxia is most commonly a manifestation of an underlying disease, and administration of oxygen thus can be viewed as a symptomatic or temporizing therapy. Only rarely is hypoxia due to a primary deficiency in the inspired gas. Because of the many causes of hypoxia, supplementation of the inspired gas alone will not suffice to correct the problem. Efforts must be directed at correcting the cause of the hypoxia. For example, airway obstruction is unlikely to respond to an increase in inspired oxygen tension without relief of the obstruction. More importantly, while hypoxemia due to hypoventilation after a narcotic overdose can be improved with supplemental oxygen administration, the patient remains at risk for respiratory embarrassment if ventilation is not increased through stimulation, narcotic reversal, or mechanical ventilation. In general, the hypoxia that results from most pulmonary diseases can be alleviated at least partially by administration of oxygen, thereby allowing time for definitive therapy to reverse the primary process. Thus, administration of oxygen is a basic and important treatment to be used in all forms of hypoxia, with the understanding that the response will vary in a way that is generally predictable from knowledge of the underlying pathophysiological processes. Reduction of Partial Pressure of an Inert Gas. Since nitrogen constitutes some 79% of ambient air, it also is the predominant gas in most gas-filled spaces in the body. In certain situations, such as bowel distension from obstruction or ileus, intravascular air embolism, or pneumothorax, it is desirable to reduce the volume of these air-filled spaces. 
Since nitrogen is relatively insoluble, inhalation of high concentrations of oxygen (and thus low concentrations of nitrogen) rapidly lowers total body partial pressure of nitrogen and provides a substantial gradient for the removal of nitrogen from gas spaces. Administration of oxygen for air embolism is additionally beneficial, because it also helps to relieve the localized hypoxia distal to the embolic vascular obstruction. In the case of decompression sickness or bends, lowering of inert gas tension in blood and tissues by oxygen inhalation prior to or during a barometric decompression can reduce the degree of supersaturation that occurs after decompression so that bubbles do not form. If bubbles do form in either tissues or the vasculature, administration of oxygen is based on the same rationale as that described for gas embolism. Hyperbaric Oxygen Therapy Oxygen is administered at greater than atmospheric pressure for a number of conditions when 100% oxygen at a single atmosphere is insufficient (Buras, 2000; Shank and Muth, 2000; Myers, 2000). To achieve concentrations of greater than 1 atmosphere, a hyperbaric chamber must be used. These chambers range from small, single-person affairs to multiroom establishments, which may include complex medical equipment. Smaller, one-person chambers typically are pressurized with oxygen, while larger ones are filled with air. In the latter case, a patient must wear a mask to receive the oxygen at the increased pressure. The larger chambers are more suitable for critically ill patients who require ventilation, monitoring, and constant attendance. Any chamber must be built to withstand pressures which may range from 200 to 600 kPa (2 to 6 atmospheres), though inhaled oxygen tension that exceeds 300 kPa (3 atmospheres) rarely is used (see Oxygen Toxicity, below). Hyperbaric oxygen therapy has two components: increased hydrostatic pressure and increased
oxygen pressure. Both factors are necessary for the treatment of decompression sickness and air embolism. The hydrostatic pressure reduces bubble volume, and the absence of inspired nitrogen increases the gradient for elimination of nitrogen and reduces hypoxia in downstream tissues. Increased oxygen pressure at the tissue level is the primary therapeutic goal for most of the other indications for hyperbaric oxygen. For example, even a small increase in PO2 in previously ischemic areas may enhance the bactericidal activity of leukocytes and increase angiogenesis. Thus, repetitive, brief exposure to hyperbaric oxygen is a useful adjunct in the treatment of chronic refractory osteomyelitis, osteoradionecrosis, or crush injury or for the recovery of compromised skin, tissue grafts, or flaps. Furthermore, increased oxygen tension can itself be bacteriostatic; the spread of infection with Clostridium perfringens and production of toxin by the bacteria are slowed when oxygen tensions exceed 33 kPa (250 mm Hg), justifying the early use of hyperbaric oxygen in clostridial myonecrosis (gas gangrene). Hyperbaric oxygen also is useful in selected instances of generalized hypoxia. In carbon monoxide poisoning, hemoglobin and myoglobin become unavailable for oxygen binding because of the high affinity of CO for these proteins. A high PO2 facilitates competition of oxygen with CO for binding sites, permitting the resumption of normal delivery of oxygen to the tissues. Hyperbaric oxygen decreases the incidence of neurological sequelae after CO intoxication; this effect may be independent of the ability of hyperbaric oxygen to speed the elimination of CO (Thom, 1989). However, a recent randomized study suggests that hyperbaric oxygen is not beneficial in carbon monoxide poisoning and might even be harmful (Scheinkestel et al., 1999). The occasional use of hyperbaric oxygen in cyanide poisoning has a similar rationale. 
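The contribution of hydrostatic pressure to bubble shrinkage in decompression sickness and air embolism follows directly from Boyle's law. A minimal sketch (2.8 atmospheres absolute is used here only as a representative recompression pressure, not a value from the text):

```python
def bubble_volume(v_initial, p_initial_atm, p_final_atm):
    """Boyle's law at constant temperature: P1 * V1 = P2 * V2,
    so V2 = V1 * P1 / P2."""
    return v_initial * p_initial_atm / p_final_atm

# A bubble formed at sea level (1 atm), recompressed to 2.8 atm absolute
print(round(bubble_volume(1.0, 1.0, 2.8), 2))  # 0.36 -- roughly a third of its original volume
```

Elimination of the remaining gas is then driven by the nitrogen-free breathing mixture, as described above for nitrogen washout.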
Hyperbaric oxygen also may be useful in severe, short-term anemia, since sufficient oxygen can be dissolved in the plasma at 3 atmospheres to meet metabolic needs. However, such treatment must be limited, because oxygen toxicity is dependent on increased PO2, not on the oxygen content of the blood. Hyperbaric oxygen therapy also has been used in such diverse conditions as multiple sclerosis, traumatic spinal cord injury, cerebrovascular accidents, bone grafts and fractures, and leprosy. However, data from well-controlled clinical trials are not sufficient to justify these uses, and the costs of hyperbaric therapy remain very high. Oxygen Toxicity Oxygen is used in cellular energy production and is crucial for cellular metabolism. However, oxygen also may have deleterious actions at the cellular level. Oxygen toxicity probably results from increased production of reactive agents such as superoxide anion, singlet oxygen, hydroxyl radical, and hydrogen peroxide (Turrens et al., 1982). These agents attack and damage biological membranes, and thus eventually result in damage to most cellular components. A variety of factors limit the toxicity of oxygen-derived, reactive agents. These factors include enzymes such as superoxide dismutase, glutathione peroxidase, and catalase, which scavenge toxic byproducts. In addition, there are reducing agents including iron, glutathione, and ascorbate. These factors, however, are insufficient to limit the destructive actions of oxygen when patients are exposed to high concentrations over a period of time. Tissues show differential sensitivity to oxygen toxicity, which is likely the result of differences in both their production of reactive compounds and protective mechanisms. Oxygen toxicity recently was reviewed by Carraway and Piantadosi (1999). Respiratory Tract The pulmonary system is usually the first to exhibit toxicity, a function of its being continuously exposed to the highest oxygen tensions in the body. 
Subtle changes in pulmonary function can occur after as little as 8 to 12 hours of exposure to 100% oxygen (Sackner et al., 1975). Increases in
capillary permeability, which will increase the alveolar/arterial O2 gradient and ultimately lead to further hypoxemia, and decreased pulmonary function can be seen after only 18 hours of exposure (Davis et al., 1983; Clark, 1988). Serious injury and death, however, require much longer exposures. Pulmonary damage is directly related to the inspired oxygen tension, and concentrations of less than 0.5 atmosphere appear safe over long time periods. The capillary endothelium is the most sensitive tissue of the lung. Endothelial injury results in loss of surface area from interstitial edema and leaks into the alveoli (Crapo et al., 1980). Decreasing the inspired oxygen concentration remains the cornerstone of therapy for oxygen toxicity. Modest decreases in toxicity have been observed in animals treated with antioxidant enzymes (White et al., 1989). Tolerance also may play a role in protection from oxygen toxicity; animals exposed briefly to high oxygen tension are subsequently more resistant to toxicity (Kravetz et al., 1980; Coursin et al., 1987). Sensitivity in human beings also can be altered by preexposure to both high and low oxygen concentrations (Hendricks et al., 1977; Clark, 1988). These studies strongly suggest that changes in alveolar surfactant and cellular levels of antioxidant enzymes play a role in protection from oxygen toxicity. Retina Retrolental fibroplasia can occur when neonates are exposed to increased oxygen tensions (Betts et al., 1977). The retinal vascular changes can progress to blindness and are likely caused by angiogenesis (Kushner et al., 1977; Ashton, 1979). The incidence of this disorder has decreased with an improved appreciation of the issues and avoidance of excessive inspired oxygen concentrations. Adults do not seem to develop the disease. Central Nervous System CNS problems are rare, and toxicity occurs only under hyperbaric conditions where exposure exceeds 200 kPa (2 atmospheres). 
Symptoms include seizures and visual changes, which resolve when oxygen tensions are returned to normal. These problems are a further reason to replace oxygen with helium under hyperbaric conditions (see Helium).

Carbon Dioxide Transfer and Elimination of Carbon Dioxide Carbon dioxide is produced by the body's metabolism at approximately the same rate as oxygen consumption. At rest, this value is about 3 ml/kg per minute, but it may increase dramatically with heavy exercise. Carbon dioxide diffuses readily from the cells into the bloodstream, where it is carried partly as bicarbonate ion, partly in chemical combination with hemoglobin and plasma proteins, and partly in solution at a partial pressure of about 6 kPa (46 mmHg) in mixed venous blood. CO2 is transported to the lung, where it is normally exhaled at the same rate at which it is produced, leaving a partial pressure of about 5.2 kPa (40 mmHg) in the alveoli and in arterial blood. An increase in PCO2 results in a respiratory acidosis and may be due to decreased ventilation or the inhalation of CO2, while an increase in ventilation results in decreased PCO2 and a respiratory alkalosis. As carbon dioxide is freely diffusible, the changes in blood PCO2 and pH soon are reflected by intracellular changes in PCO2 and pH. Effects of Carbon Dioxide

Alterations in PCO2 and pH have widespread effects in the body, particularly on respiration, circulation, and the CNS. More complete discussions of these and other effects are found in textbooks of physiology (see Nunn, 1993a). Respiration Carbon dioxide is a rapid, potent stimulus to ventilation in direct proportion to the inspired CO2. Inhalation of 10% carbon dioxide can produce minute volumes of 75 liters per minute in normal individuals. Carbon dioxide acts at multiple sites to stimulate ventilation. Respiratory integration areas in the brainstem are acted upon by impulses from medullary and peripheral arterial chemoreceptors. The mechanism by which carbon dioxide acts on these receptors probably involves changes in pH (Nattie, 1999; Drysdale et al., 1981). Elevated PCO2 causes bronchodilation, whereas hypocarbia causes constriction of airway smooth muscle; these responses may play a role in matching pulmonary ventilation and perfusion (Duane et al., 1979). Circulation The circulatory effects of carbon dioxide result from the combination of its direct local effects and its centrally mediated effects on the autonomic nervous system. The direct effect of carbon dioxide on the heart, diminished contractility, results from pH changes (van den Bos et al., 1979). The direct effect on systemic blood vessels results in vasodilation. Carbon dioxide causes widespread activation of the sympathetic nervous system and an increase in the plasma concentrations of epinephrine, norepinephrine, angiotensin, and other vasoactive peptides (Staszewska-Barczak and Dusting, 1981). The results of sympathetic nervous system activation are, in general, opposite to the local effects of carbon dioxide. The sympathetic effects consist of increases in cardiac contractility, heart rate, and vasoconstriction (see Chapter 10: Catecholamines, Sympathomimetic Drugs, and Adrenergic Receptor Antagonists). 
The balance of opposing local and sympathetic effects, therefore, determines the total circulatory response to carbon dioxide. The net effect of carbon dioxide inhalation is an increase in cardiac output, heart rate, and blood pressure. In blood vessels, however, the direct vasodilating actions of CO2 appear more important, and total peripheral resistance decreases when the PCO2 is increased. Carbon dioxide also is a potent coronary vasodilator (Ely et al., 1982). Cardiac arrhythmias associated with increased PCO2 are due to the release of catecholamines. Hypocarbia results in opposite effects: decreased blood pressure and vasoconstriction in skin, intestine, brain, kidney, and heart. These actions are exploited clinically in the use of hyperventilation in the presence of intracranial hypertension. Central Nervous System Hypercarbia depresses the excitability of the cerebral cortex and increases the cutaneous pain threshold through a central action. This central depression has clinical importance. For example, in patients hypoventilating from narcotics or anesthetics, increasing PCO2 may result in further CNS depression, which in turn may worsen the respiratory depression. This positive feedback cycle can be deadly. The inhalation of high concentrations of carbon dioxide (about 50%) produces marked cortical and subcortical depression of a type similar to that produced by anesthetic agents. Under certain circumstances, inspired CO2 (25% to 30%) can result in subcortical activation and seizures. Methods of Administration

Carbon dioxide is marketed in gray metal cylinders as the pure gas or as carbon dioxide mixed with oxygen. It usually is administered at a concentration of 5% to 10% in combination with oxygen by means of a facemask. Another method for the temporary administration of carbon dioxide is by rebreathing, for example from an anesthesia breathing circuit when the soda lime canister is bypassed or from something as simple as a paper bag. A potential safety issue exists in that CO2 tanks containing oxygen are the same color as those that are 100% CO2. When tanks containing oxygen have been used inadvertently where a fire hazard exists (e.g., in the presence of electrocautery during laparoscopic surgery), explosions and fires have resulted. Therapeutic Uses Inhalation of carbon dioxide is used less commonly today than in the past because there are now more effective treatments for most indications. Inhalation of carbon dioxide has been used during anesthesia to increase the speed of induction and emergence from inhalational anesthesia by increasing minute ventilation and cerebral blood flow. However, this technique results in some degree of respiratory acidosis. Hypocarbia with its attendant respiratory alkalosis still has some uses in anesthesia. It constricts cerebral vessels, decreasing brain size slightly, and thus may facilitate the performance of neurosurgical operations. Although carbon dioxide stimulates respiration, it is not useful in situations where respiratory depression has resulted in hypercarbia or acidosis, since further depression results. A common use of CO2 is for insufflation during endoscopic procedures (e.g., laparoscopic surgery), because it is highly soluble and does not support combustion. Any inadvertent gas emboli are thus more easily dissolved and eliminated via the respiratory system. Recently CO2 has been utilized during open cardiac surgery, where it is used to flood the surgical field. 
Because of its density, CO2 displaces the air surrounding the open heart so that any gas bubbles trapped in the heart are CO2 rather than insoluble nitrogen (Nadolny and Svensson, 2000). For the same reasons, CO2 is used to debubble cardiopulmonary bypass and extracorporeal membrane oxygenation (ECMO) circuits. It also can be used to adjust pH during bypass procedures when a patient is cooled.
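The relationship between PCO2 and pH that underlies the respiratory acidosis and alkalosis discussed above, including pH adjustment during bypass, can be approximated by the Henderson-Hasselbalch equation for the bicarbonate buffer system. This is a standard approximation (the constants 6.1 and 0.03 are the usual values for plasma at 37°C); the example values are illustrative:

```python
import math

def plasma_ph(hco3_meq_l, pco2_mmhg):
    """Henderson-Hasselbalch: pH = 6.1 + log10([HCO3-] / (0.03 * PCO2)),
    with PCO2 in mmHg and bicarbonate in mEq/liter."""
    return 6.1 + math.log10(hco3_meq_l / (0.03 * pco2_mmhg))

print(round(plasma_ph(24, 40), 2))  # 7.4  -- normal arterial values
print(round(plasma_ph(24, 60), 2))  # 7.22 -- acute hypercapnia: respiratory acidosis
print(round(plasma_ph(24, 25), 2))  # 7.61 -- hyperventilation: respiratory alkalosis
```

Because CO2 crosses cell membranes freely, the same relationship drives the intracellular pH changes noted above.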

Nitric Oxide Nitric oxide (NO), a free radical gas long known as an air pollutant and a potentially toxic agent, particularly when further oxidized (see below), recently has been shown to be an endogenous cell-signaling molecule of great physiological importance. As knowledge of the important actions of NO has evolved, interest in the use of NO as a therapeutic agent has grown. Endogenous NO is produced from the amino acid L-arginine by a family of enzymes called NO synthases. NO is now recognized as a novel cell messenger implicated in a wide range of physiological and pathophysiological events in numerous cell types and processes, including the cardiovascular, immune, and nervous systems. In the vasculature, NO produced by endothelial cells is a primary determinant of resting vascular tone through basal release and causes vasodilation when synthesized in response to shear stress and to a variety of vasodilating agents. It also plays an active role in inhibiting platelet aggregation and adhesion. Impaired NO production has been implicated in disease states such as atherosclerosis, hypertension, cerebral and coronary vasospasm, and ischemia-reperfusion injury. In the immune system, NO serves as an important effector of macrophage-induced cytotoxicity, and its overproduction is an important mediator of inflammatory states. In neurons, NO serves multiple functions, acting as a mediator of long-term potentiation, of
N-methyl-D-aspartate (NMDA)-mediated cytotoxicity, and as the mediator of nonadrenergic, noncholinergic neurotransmission; it also has been implicated in mediating central nociceptive pathways. The physiology and pathophysiology of endogenous NO have been extensively reviewed (Moncada and Palmer, 1991; Nathan, 1992; Ignarro et al., 1999). Therapeutic Use of NO Inhalation of NO gas has received considerable therapeutic attention due to its ability to dilate selectively the pulmonary vasculature with minimal systemic cardiovascular effects (Steudel et al., 1999). The lack of effect of inhaled NO on the systemic circulation is due to its strong binding and inactivation by oxyhemoglobin upon exposure to the pulmonary circulation. Ventilation-perfusion matching is preserved or improved by NO, because inhaled NO is distributed only to ventilated areas of the lung and dilates only those vessels directly adjacent to the ventilated alveoli. Thus, inhaled NO will decrease elevated pulmonary artery pressure and pulmonary vascular resistance and often improve oxygenation (Steudel et al., 1999; Haddad et al., 2000). Due to its selective pulmonary vasodilating action, inhaled NO is undergoing intensive study as a potential therapeutic agent for numerous diseases associated with increased pulmonary vascular resistance. Therapeutic trials of inhaled NO in a wide range of such conditions have confirmed its ability to decrease pulmonary vascular resistance and often increase oxygenation, but in all but a few cases these trials have yet to demonstrate long-term improvement in terms of morbidity or mortality (Dellinger, 1999; Cheifetz, 2000). Inhaled NO has been approved by the United States Food and Drug Administration only for use in newborns with persistent pulmonary hypertension. 
In this disease state, NO inhalation has been shown to reduce significantly the necessity for extracorporeal membrane oxygenation, although overall mortality has been unchanged (Kinsella et al., 1997; Roberts et al., 1997). Notably, numerous trials of inhaled NO in adult and pediatric acute respiratory distress syndrome have failed to demonstrate an impact on outcome (Dellinger, 1999; Cheifetz, 2000). Several small studies and case reports have suggested potential benefits of inhaled NO in a variety of conditions, including weaning from cardiopulmonary bypass in adult and congenital heart disease patients; primary pulmonary hypertension; pulmonary embolism; acute chest syndrome in sickle-cell patients; congenital diaphragmatic hernia; high-altitude pulmonary edema; and lung transplantation (Steudel et al., 1999; Haddad et al., 2000). Larger, prospective, randomized studies, however, have not yet been performed or have failed to confirm any changes in outcome. At the present time, outside of clinical investigation, therapeutic use and benefit of inhaled NO are limited to newborns with persistent pulmonary hypertension. Diagnostic Uses of NO Inhaled NO also is used in several diagnostic applications. Inhaled NO can be used during cardiac catheterization to evaluate safely and selectively the pulmonary vasodilating capacity of patients with heart failure and infants with congenital heart disease. Inhaled NO also is used to determine the diffusion capacity (DL) across the alveolar-capillary unit. NO is more effective than carbon monoxide in this regard because of its greater affinity for hemoglobin and its higher water solubility at body temperature (Steudel et al., 1999; Haddad et al., 2000). NO is produced from the nasal passages and from the lungs of normal human subjects and can be detected in exhaled gas. The measurement of exhaled NO has been investigated for its utility in assessment of respiratory tract diseases. 
Measurement of exhaled NO may prove to be useful in diagnosis of asthma and in respiratory tract infections (Haddad et al., 2000).

Toxicity of NO Administered at low concentrations (0.1 to 50 parts per million), inhaled NO appears to be safe and without significant side effects. Pulmonary toxicity can occur with levels higher than 50 to 100 ppm. In the context of NO as an atmospheric pollutant, the Occupational Safety and Health Administration places the seven-hour exposure limit at 50 ppm. Part of the toxicity of NO may be related to its further oxidation to nitrogen dioxide (NO2) in the presence of high concentrations of oxygen. Even low concentrations of NO2 (2 ppm) have been shown to be highly toxic in animal models, with observed changes in lung histopathology, including loss of cilia, hypertrophy, and focal hyperplasia in the epithelium of terminal bronchioles. It is important, therefore, to keep NO2 formation during NO therapy at a low level. This can be achieved through appropriate filters and scavengers and the use of high-quality gas mixtures. Laboratory studies have suggested potential additional toxic effects of chronic low doses of inhaled NO, including surfactant inactivation and the formation of peroxynitrite by interaction with superoxide. The ability of NO to inhibit or alter the function of a number of iron- and heme-containing proteins (including cyclooxygenase, lipoxygenases, and oxidative cytochromes), as well as its interactions with ADP-ribosylation, suggests a need for further investigation of the toxic potential of NO under therapeutic conditions (Steudel et al., 1999; Haddad et al., 2000). The development of methemoglobinemia is a significant complication of inhaled NO at higher concentrations, and rare deaths have been reported with overdoses of NO. The blood content of methemoglobin, however, generally will not increase to toxic levels with appropriate use of inhaled NO. Methemoglobin concentrations should be intermittently monitored during NO inhalation (Steudel et al., 1999; Haddad et al., 2000). 
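One reason that keeping the delivered NO concentration low disproportionately limits NO2 formation is that the gas-phase oxidation 2NO + O2 → 2NO2 is second order in NO. The scaling can be sketched as follows (the rate constant is an arbitrary illustrative value, not a measured one, so only ratios of rates are meaningful):

```python
def relative_no2_formation_rate(no_ppm, o2_fraction, k=1.0):
    """Rate law for 2NO + O2 -> 2NO2: rate = k * [NO]^2 * [O2].
    k is an arbitrary illustrative constant; only rate ratios are meaningful."""
    return k * no_ppm ** 2 * o2_fraction

# Halving the NO dose quarters the rate of NO2 formation at a given FIO2
ratio = relative_no2_formation_rate(40, 0.9) / relative_no2_formation_rate(20, 0.9)
print(ratio)  # 4.0
```

The rate also scales linearly with the oxygen concentration, consistent with the observation above that NO2 formation is a particular concern in the presence of high concentrations of oxygen, and with the recommendation to administer the lowest effective NO dose.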
Inhaled NO can inhibit platelet function and has been shown to increase bleeding time in some clinical studies, although reports of bleeding complications are not apparent in the literature. In patients with impaired function of the left ventricle, NO has the potential to impair left ventricular performance further by dilating the pulmonary circulation and increasing blood flow to the left ventricle, thereby increasing left atrial pressure and promoting pulmonary edema formation. Careful monitoring of cardiac output, left atrial pressure, or pulmonary capillary wedge pressure is important in this situation (Steudel et al., 1999). Despite these concerns, there are limited reports of inhaled NO-related toxicity in humans. The most important requirements for safe NO inhalation therapy are outlined by Steudel et al. (1999) and include: (1) continuous measurement of NO and NO2 concentrations using either chemiluminescence or electrochemical analyzers; (2) frequent calibration of monitoring equipment; (3) intermittent analysis of blood methemoglobin levels; (4) the use of certified tanks of NO; and (5) administration of the lowest NO concentration required for therapeutic effect.

Methods of Administration

Courses of treatment of patients with inhaled NO are highly varied, extending from 0.1 to 40 ppm in dose and from a few hours to several weeks in duration. The minimum effective inhaled NO concentration should be determined for each patient to minimize the chance of toxicity. Commercial NO systems are available that will accurately deliver inspired NO concentrations between 0.1 and 80 ppm and simultaneously measure NO and NO2 concentrations. A constant inspired concentration of NO is obtained by administering NO in nitrogen to the inspiratory limb of the ventilator circuit in either a pulse or continuous mode. While inhaled NO may be administered to spontaneously breathing patients via a closely fitting mask, it usually is delivered during
mechanical ventilation. Nasal prong administration is being employed in therapeutic trials of home administration for treatment of primary pulmonary hypertension (Steudel et al., 1999; Haddad et al., 2000). Acute discontinuation of NO inhalation can lead to a rebound pulmonary artery hypertension with an increase in right-to-left intrapulmonary shunting and a decrease in oxygenation. To avoid this phenomenon, a graded decrease of inhaled NO concentration is important in the process of weaning a patient from inhaled NO (Steudel et al., 1999; Haddad et al., 2000).
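The constant inspired concentration described above follows from simple flow-weighted dilution of the source gas into the inspiratory limb. The sketch below illustrates that arithmetic; the function name and the example flow values are illustrative assumptions, not values taken from the text.

```python
def inspired_no_ppm(tank_ppm, no_flow_lpm, total_flow_lpm):
    """Estimate the inspired NO concentration (ppm) when NO-in-nitrogen
    from a source tank is blended into the total inspiratory gas flow.

    Simple dilution model: the inspired concentration scales with the
    fraction of total flow contributed by the NO source stream.
    """
    if not 0 < no_flow_lpm <= total_flow_lpm:
        raise ValueError("NO flow must be positive and no greater than total flow")
    return tank_ppm * no_flow_lpm / total_flow_lpm

# Illustrative example: an 800-ppm source tank blended at 0.5 L/min
# into a total inspiratory flow of 10 L/min gives 40 ppm inspired NO.
print(inspired_no_ppm(800, 0.5, 10.0))  # 40.0
```

In practice the delivered concentration is verified by the chemiluminescence or electrochemical analyzers mentioned above rather than computed; the calculation only shows why source concentration and flow ratio together set the dose.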

Helium

Helium is an inert gas whose low density, low solubility, and high thermal conductivity provide the basis for its medical and diagnostic uses. Helium is produced by separation from liquefied natural gas and supplied in brown cylinders. It can be mixed with oxygen and administered by mask or tracheal tube. Under hyperbaric conditions, it can be substituted for the bulk of other gases, resulting in a mixture of much lower density that is easier to breathe. The primary uses of helium are in pulmonary function testing, the treatment of respiratory obstruction, during laser airway surgery, for diving at depth, and, most recently, as a label in imaging studies. The determination of residual lung volume, functional residual capacity, and related lung volumes requires a highly diffusible, nontoxic gas that is insoluble (and thus does not leave the lung via the bloodstream), so that the lung volume can be measured by dilution. Helium is well suited to these needs and is much cheaper than the alternatives. In these tests, a breath of a known concentration of helium is given, and the concentration of helium is then measured in the mixed expired gas, allowing calculation of the other pulmonary volumes. Pulmonary gas flow is normally laminar, but with an increased flow rate or a narrowed flow pathway, a component becomes turbulent. Helium can be added to oxygen to treat this turbulence due to airway obstruction. The density of helium is substantially less than that of air, and flow rates under turbulent conditions are increased with lower-density gases. This results in decreased work of breathing with mixtures of helium and oxygen. However, several factors limit the usefulness of this approach. Oxygenation frequently is the principal problem in airway obstruction, and the need for an increased inspired oxygen concentration may limit the amount of helium that can be used. Furthermore, the viscosity of helium is higher than that of air, and increased viscosity reduces laminar flow.
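The dilution measurement described above can be made concrete with a closed-circuit helium-dilution calculation. The function name and the sample values below are illustrative assumptions; the underlying conservation relation (the total amount of helium is unchanged before and after equilibration) is the standard basis of the test.

```python
def lung_volume_by_helium_dilution(spirometer_volume_l, c_initial, c_final):
    """Closed-circuit helium dilution: helium neither leaves via the
    blood nor is produced, so its total amount is conserved:

        c_initial * V_spirometer = c_final * (V_spirometer + V_lung)

    Solving for the unknown lung volume (e.g., functional residual
    capacity) in communication with the circuit:

        V_lung = V_spirometer * (c_initial - c_final) / c_final
    """
    if not 0 < c_final < c_initial:
        raise ValueError("final concentration must be positive and below initial")
    return spirometer_volume_l * (c_initial - c_final) / c_final

# Illustrative example: 2.0 L of 10% helium equilibrates to 6% helium,
# implying roughly 1.33 L of lung volume connected to the circuit.
print(round(lung_volume_by_helium_dilution(2.0, 0.10, 0.06), 2))  # 1.33
```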
Helium has high thermal conductivity, which makes it useful during laser surgery on the airway. This more rapid conduction of heat away from the point of contact of the laser beam reduces the spread of tissue damage and the likelihood that the ignition point of flammable materials in the airway will be reached. Its low density improves flow through the small endotracheal tubes typically used in such procedures. Recently, laser-polarized helium has been used as an inhalational contrast agent for pulmonary magnetic resonance imaging. Optical pumping of nonradioactive helium increases the signal from the gas in the lung sufficiently to permit detailed imaging of the airways and inspired airflow patterns (Kauczor et al., 1998).

Hyperbaric Applications

The depth and duration of diving activity are limited by oxygen toxicity, inert gas (nitrogen) narcosis, and nitrogen supersaturation during decompression. Oxygen toxicity is a problem with prolonged exposure to compressed air at 500 kPa (5 atmospheres) or more. This problem can be minimized by diluting the oxygen with helium, which lacks narcotic potential even at very high pressures and is quite insoluble in body tissues. This low solubility reduces the likelihood of bubble formation after decompression, which therefore can be achieved more rapidly. The low density of helium also reduces the work of breathing in the otherwise dense hyperbaric atmosphere. The lower heat capacity of helium also decreases respiratory heat loss, which can be significant when diving at depth.

Acknowledgment

The authors wish to acknowledge Drs. Roderic G. Eckenhoff and David E. Longnecker, authors of this chapter in the ninth edition of Goodman and Gilman's The Pharmacological Basis of Therapeutics, some of whose text we have retained in this edition.

Chapter 17. Hypnotics and Sedatives


Overview

A wide variety of agents have the capacity to depress the function of the central nervous system (CNS) such that calming or drowsiness (sedation) is produced. Older sedative-hypnotic drugs depress the CNS in a dose-dependent fashion, progressively producing sedation, sleep, unconsciousness, surgical anesthesia, coma, and, ultimately, fatal depression of respiration and cardiovascular regulation. The CNS depressants addressed in this chapter include the benzodiazepines and barbiturates as well as sedative-hypnotic agents of diverse chemical structure (e.g., paraldehyde, chloral hydrate). Volatile anesthetics are discussed in Chapter 14: General Anesthetics. Benzodiazepines have only a limited capacity to produce profound and potentially fatal CNS depression. Although coma may be produced at very high doses, benzodiazepines cannot induce a state of surgical anesthesia by themselves and are virtually incapable of causing fatal respiratory depression or cardiovascular collapse unless other CNS depressants also are present. Because of this measure of safety, benzodiazepines and their newer analogs have largely replaced older agents for the treatment of insomnia or anxiety. Sedative-hypnotic drugs, particularly the benzodiazepines, also are used to produce sedation and amnesia before or during diagnostic or operative procedures, and some, notably certain barbiturates, are used at high doses to induce or maintain surgical anesthesia (see Chapter 14: General Anesthetics). A few barbiturates and benzodiazepines are used as antiepileptic agents (see Chapter 21: Drugs Effective in the Therapy of the Epilepsies), and a few benzodiazepines may be used as muscle relaxants (see Chapter 22: Treatment of Central Nervous System Degenerative Disorders). The role of the benzodiazepines and other agents in the pharmacotherapy of anxiety is discussed in Chapter 19: Drugs and the Treatment of Psychiatric Disorders: Depression and Anxiety Disorders.
CNS depressants also include the aliphatic alcohols, particularly ethanol. Ethanol shares many pharmacological properties with the nonbenzodiazepine sedative-hypnotic drugs. However, its usefulness in the treatment of sleep disorders is limited, and often it may be more disruptive than beneficial. The pharmacology of ethanol is discussed in Chapter 18: Ethanol. Abuse of ethanol and
other CNS depressants is discussed in Chapter 24: Drug Addiction and Drug Abuse.

Hypnotics and Sedatives: Introduction

A sedative drug decreases activity, moderates excitement, and calms the recipient, whereas a hypnotic drug produces drowsiness and facilitates the onset and maintenance of a state of sleep that resembles natural sleep in its electroencephalographic characteristics and from which the recipient can be aroused easily. The latter effect sometimes is called hypnosis, but the sleep induced by hypnotic drugs does not resemble the artificially induced passive state of suggestibility also called hypnosis. The nonbenzodiazepine sedative-hypnotic drugs belong to a group of agents that depress the central nervous system (CNS) in a dose-dependent fashion, progressively producing calming or drowsiness (sedation), sleep (pharmacological hypnosis), unconsciousness, coma, surgical anesthesia, and fatal depression of respiration and cardiovascular regulation. They share these properties with a large number of chemicals, including general anesthetics (see Chapter 14: General Anesthetics) and aliphatic alcohols, most notably ethanol (see Chapter 18: Ethanol). Only two landmarks on the continuum of CNS depression produced by increasing concentrations of these agents can be defined with a reasonable degree of precision: surgical anesthesia, a state in which painful stimuli elicit no behavioral or autonomic response (see Chapter 13: History and Principles of Anesthesiology), and death, resulting from depression of medullary neurons sufficient to disrupt coordination of cardiovascular function and respiration. The "end points" at lower concentrations of CNS depressants are defined with less precision, in terms of deficits in cognitive function (including attention to environmental stimuli) or motor skills (e.g., ataxia), or of the intensity of sensory stimuli needed to elicit some reflex or behavioral response.
Other important indices of decreased activity of the CNS, such as analgesia and seizure suppression, do not necessarily fall along this continuum; they may not be present at subanesthetic concentrations of a CNS-depressant drug (e.g., a barbiturate), or they may be achieved with minimal sedation or other evidence of CNS depression (e.g., with low doses of opioids, phenytoin, or ethosuximide). Sedation is a side effect of many drugs that are not general CNS depressants (e.g., antihistamines, neuroleptics). Although such agents can intensify the effects of CNS depressants, they usually produce more specific therapeutic effects at concentrations far lower than those causing substantial CNS depression. They cannot, for example, induce surgical anesthesia in the absence of other agents. The benzodiazepine sedative-hypnotics resemble such agents; although coma may occur at very high doses, neither surgical anesthesia nor fatal intoxication is produced by benzodiazepines in the absence of other drugs with CNS-depressant actions. Moreover, certain congeners can specifically antagonize the actions of benzodiazepines without eliciting significant effects in their absence. This constellation of properties sets the benzodiazepines apart from other sedative-hypnotic drugs and imparts a measure of safety that has resulted in benzodiazepines largely displacing older agents for the treatment of insomnia and anxiety.

History

Since antiquity, alcoholic beverages and potions containing laudanum and various herbals have been used to induce sleep. The first agent to be introduced specifically as a sedative, and soon thereafter as a hypnotic, was bromide, in the middle of the nineteenth century. Chloral hydrate, paraldehyde, urethane, and sulfonal came into use before the introduction of barbital in 1903 and phenobarbital in 1912. Their success spawned the synthesis and testing of over 2500 barbiturates, of which approximately 50 were distributed commercially.
The barbiturates so dominated the stage that fewer than a dozen other sedative-hypnotics were successfully marketed before 1960. The partial separation of sedative-hypnotic-anesthetic from anticonvulsant properties embodied in phenobarbital led to searches for agents with more selective effects on the functions of the CNS. As a result, relatively nonsedative anticonvulsants, notably phenytoin and trimethadione, were developed in the late 1930s and early 1940s (see Chapter 21: Drugs Effective in the Therapy of the Epilepsies). The advent of chlorpromazine and meprobamate in the early 1950s, with their taming effects in animals, and the development of increasingly sophisticated methods for evaluating the behavioral effects of drugs set the stage in the 1950s for the synthesis of chlordiazepoxide by Sternbach and the discovery of its unique pattern of actions by Randall (see Symposium, 1982). The introduction of chlordiazepoxide into clinical medicine in 1961 ushered in the era of benzodiazepines. Most of the benzodiazepines that have reached the marketplace were selected for high anxiolytic potency in relation to their depression of CNS function. However, all benzodiazepines possess sedative-hypnotic properties to varying degrees; these properties are exploited extensively in clinical practice, especially to facilitate sleep. Mainly because of their remarkably low capacity to produce fatal CNS depression, the benzodiazepines have displaced the barbiturates as sedative-hypnotic agents. Over the past decade, it has become clear that all benzodiazepines in clinical use have the capacity to promote the binding of the major inhibitory neurotransmitter, gamma-aminobutyric acid (GABA), to the GABAA subtype of GABA receptors, which exist as multisubunit, ligand-gated chloride channels. Benzodiazepines enhance the GABA-induced ionic currents through these channels.
Pharmacological investigations have provided evidence for heterogeneity among sites of binding and action of benzodiazepines, while biochemical and molecular biological investigations have revealed the numerous varieties of subunits that make up the GABA-gated chloride channels expressed in different neurons. Since receptor subunit composition appears to govern the interaction of various allosteric modulators with these channels, there has been a surge in efforts to find agents displaying a different mixture of benzodiazepine-like properties that may reflect selective actions on one or more subtypes of GABA receptors. One result of these efforts has been the introduction of zolpidem, one of several imidazopyridine compounds that appear to exert sedative-hypnotic actions by interacting with a subset of benzodiazepine binding sites. Zaleplon, a pyrazolopyrimidine, also has specificity for a subset of GABAA receptors. Investigation of compounds in many other chemical classes is in progress.

Benzodiazepines

Although the benzodiazepines in clinical use exert qualitatively similar effects, important quantitative differences in their pharmacodynamic spectra and pharmacokinetic properties have led to varying patterns of therapeutic application. There is reason to believe that a number of distinct mechanisms of action contribute in varying degrees to the sedative-hypnotic, muscle-relaxant, anxiolytic, and anticonvulsant effects of the benzodiazepines. Recent findings provide evidence that specific subunits of the GABAA receptor are responsible for specific pharmacological properties of benzodiazepines. While only the benzodiazepines used primarily for hypnosis will be discussed in detail, this chapter will describe the general properties of the group and the important differences among individual agents (see also Chapters 19: Drugs and the Treatment of Psychiatric Disorders: Depression and Anxiety Disorders and 21: Drugs Effective in the Therapy of the Epilepsies).
Chemistry

The structures of the benzodiazepines in use in the United States are shown in Table 17-1, as are those of a few related compounds discussed below. The term benzodiazepine refers to the portion of the structure composed of a benzene ring (A) fused to a seven-membered diazepine ring (B). However, since all the important benzodiazepines contain a 5-aryl substituent (ring C) and a 1,4-diazepine ring, the term has come to mean the 5-aryl-1,4-benzodiazepines. Various modifications in the structure of the ring systems have yielded compounds with similar activities. These include 1,5-benzodiazepines (e.g., clobazam) and the replacement of the fused benzene ring (A) with heteroaromatic systems such as thieno (e.g., brotizolam). The chemical nature of substituents at positions 1 to 3 can vary widely and can include triazolo or imidazolo rings fused at positions 1 and 2. Replacement of ring C with a keto function at position 5 and a methyl substituent at position 4 are important structural features of the benzodiazepine antagonist flumazenil (ROMAZICON; Ro 15-1788; see Haefely, 1983). In addition to various benzodiazepine or imidazobenzodiazepine derivatives, a large number of nonbenzodiazepine compounds have been synthesized that compete with classic benzodiazepines or flumazenil for binding at specific sites in the CNS (see Gardner et al., 1993). These include representatives from the β-carbolines (containing an indole nucleus fused to a pyridine ring), imidazopyridines (e.g., zolpidem; see below), imidazopyrimidines and imidazoquinolones, and cyclopyrrolones (e.g., zopiclone).

Pharmacological Properties

Virtually all effects of the benzodiazepines result from actions of these drugs on the CNS. The most prominent of these effects are sedation, hypnosis, decreased anxiety, muscle relaxation, anterograde amnesia, and anticonvulsant activity. Only two effects of these drugs appear to result from actions on peripheral tissues: coronary vasodilation, seen after intravenous administration of therapeutic doses of certain benzodiazepines, and neuromuscular blockade, seen only with very high doses.
A variety of benzodiazepine-like effects have been observed in vivo and in vitro and have been classified as full agonistic effects (i.e., faithfully mimicking agents such as diazepam with relatively low fractional occupancy of binding sites) or partial agonistic effects (i.e., producing less intense maximal effects and/or requiring relatively high fractional occupancy compared with agents such as diazepam). Some compounds produce effects opposite to those of diazepam in the absence of benzodiazepine-like agonists and have been termed inverse agonists; partial inverse agonists also have been recognized. The vast majority of effects of agonists and inverse agonists can be reversed or prevented by the benzodiazepine antagonist flumazenil, which competes with agonists and inverse agonists for binding to the benzodiazepine receptor. In addition, representatives from various classes of compounds behave like flumazenil and act only to block the effects of agonists or inverse agonists.

Central Nervous System

While benzodiazepines affect activity at all levels of the neuraxis, some structures are affected to a much greater extent than others. The benzodiazepines are not capable of producing the same degrees of neuronal depression as do barbiturates and volatile anesthetics. All of the benzodiazepines have very similar pharmacological profiles. Nevertheless, the drugs differ in selectivity, and the clinical usefulness of individual benzodiazepines thus varies considerably. As the dose of a benzodiazepine is increased, sedation progresses to hypnosis and then to stupor. The clinical literature often refers to the "anesthetic" effects and uses of certain benzodiazepines, but the drugs do not cause a true general anesthesia, since awareness usually persists and relaxation sufficient to allow surgery cannot be achieved. However, at "preanesthetic" doses, there is amnesia for events subsequent to the administration of the drug; this may create the illusion of previous anesthesia. The recent discovery of a molecular basis for numerous benzodiazepine receptor subtypes (see below) has provided the rationale for attempts to separate the anxiolytic actions of these drugs from their sedative/hypnotic effects. However, distinguishing between these behaviors remains problematic. Measurements of anxiety and sedation are difficult in human beings, and the validity of animal models for anxiety and sedation is uncertain. The existence of multiple benzodiazepine receptors may partially explain the diversity of pharmacological responses in different species.

Animal Models of Anxiety

In animal models of anxiety, most attention has been focused on the ability of benzodiazepines to increase locomotor, feeding, or drinking behavior that has been suppressed by novel or aversive stimuli. For such tests, animal behaviors that previously had been rewarded by food or water are periodically punished by an electric shock. The time during which shocks are delivered is signaled by some auditory or visual cue, and untreated animals stop performing almost completely when the cue is perceived. The difference in behavioral responses during the punished and unpunished periods is eliminated by benzodiazepine agonists, usually at doses that do not reduce the rate of unpunished responses or produce other signs of impaired motor function. Similarly, rats placed in an unfamiliar environment exhibit markedly reduced exploratory behavior (neophobia), whereas animals treated with benzodiazepines do not. Opioid analgesics and neuroleptic (antipsychotic) drugs do not increase suppressed behaviors, and phenobarbital and meprobamate usually do so only at doses that also reduce spontaneous or unpunished behaviors or produce ataxia. The difference between the dose required to impair motor function and that necessary to increase punished behavior varies widely among the benzodiazepines and depends on the species and experimental protocol.
Although such differences may have encouraged the marketing of some benzodiazepines as selective sedative-hypnotic agents, they have not predicted with any accuracy the magnitude of sedative effects among those benzodiazepines marketed as anxiolytic agents.

Tolerance to Benzodiazepines

Studies on tolerance in laboratory animals often are cited to support the belief that disinhibitory effects of benzodiazepines are separate from their sedative-ataxic effects. For example, tolerance to the depressant effects on rewarded or neutral behavior occurs after several days of treatment with benzodiazepines; the disinhibitory effects of the drugs on punished behavior are augmented initially and decline after 3 to 4 weeks (see File, 1985). Although most patients who chronically ingest benzodiazepines report that drowsiness wanes over a few days, tolerance to the impairment of some measures of psychomotor performance (e.g., visual tracking) usually is not observed. The development of tolerance to the anxiolytic effects of benzodiazepines is a subject of debate (Lader and File, 1987). However, many patients can maintain themselves on a fairly constant dose; increases or decreases in dosage appear to correspond to changes in problems or stresses. Nevertheless, some patients either do not reduce their dosage when stress is relieved or steadily escalate dosage. Such behavior may be associated with the development of drug dependence (see Woods et al., 1987; DuPont, 1988). Some benzodiazepines induce muscle hypotonia without interfering with normal locomotion and can decrease rigidity in patients with cerebral palsy. However, in contrast to effects in animals, there is only a limited degree of selectivity in human beings. Clonazepam in nonsedative doses does cause muscle relaxation in patients, but diazepam and most other benzodiazepines do not. Tolerance occurs to both the muscle-relaxant and ataxic effects of these drugs. Experimentally, benzodiazepines inhibit seizure activity induced by either pentylenetetrazol or picrotoxin, but strychnine- and maximal electroshock-induced seizures are suppressed only with doses that also severely impair locomotor activity. Clonazepam, nitrazepam, and nordazepam are among those compounds with more selective anticonvulsant activity than most other benzodiazepines. Benzodiazepines also suppress photic seizures in baboons and ethanol-withdrawal seizures in human beings. However, the development of tolerance to the anticonvulsant effects has limited the usefulness of benzodiazepines in the treatment of recurrent seizure disorders in human beings (see Chapter 21: Drugs Effective in the Therapy of the Epilepsies). Although analgesic effects of benzodiazepines have been observed in experimental animals, only transient analgesia is apparent in human patients after intravenous administration. Such effects actually may involve the production of amnesia. However, it is clear that benzodiazepines do not cause hyperalgesia, unlike the barbiturates.

Effects on the Electroencephalogram (EEG) and Sleep Stages

The effects of benzodiazepines on the waking EEG resemble those of other sedative-hypnotic drugs. Alpha activity is decreased, but there is an increase in low-voltage fast activity. Tolerance occurs to these effects. Most benzodiazepines decrease sleep latency, especially when first used, and diminish the number of awakenings and the time spent in stage 0 (a stage of wakefulness). Time in stage 1 (descending drowsiness) usually is decreased, and there is a prominent decrease in the time spent in slow-wave sleep (stages 3 and 4). Most benzodiazepines increase the time from onset of spindle sleep to the first burst of rapid-eye-movement (REM) sleep, and the time spent in REM sleep usually is shortened.
However, the number of cycles of REM sleep usually is increased, mostly late in the sleep time. Zolpidem does not suppress REM sleep to the same extent as do benzodiazepines and thus may be superior to benzodiazepines for use as a hypnotic (Dujardin et al., 1998). Despite the shortening of stage 4 and REM sleep, the net effect of administration of benzodiazepines typically is an increase in total sleep time, largely because of an increase in time spent in stage 2 (which is the major fraction of non-REM sleep). The effect is greatest in subjects with shortest baseline total sleep time. In addition, despite the increase in the number of REM cycles, the number of shifts to lighter sleep stages (1 and 0) and the amount of body movement are diminished. The nocturnal peaks in the concentrations of growth hormone, prolactin, and luteinizing hormone in plasma are not affected. During chronic nocturnal use of benzodiazepines, the effects on the various stages of sleep usually decline within a few nights. When such use is discontinued, the pattern of drug-induced changes in sleep parameters may "rebound," and an increase in the amount and density of REM sleep may be especially prominent. However, if the dosage has not been excessive, patients usually will note only a shortening of sleep time rather than an exacerbation of insomnia. Although some differences in the patterns of effects exerted by the various benzodiazepines have been noted, their use usually imparts a sense of deep or refreshing sleep. It is uncertain to which effect on sleep parameters this feeling can be attributed. As a result, variations in the pharmacokinetic properties of individual benzodiazepines appear to be much more important determinants of the utility of the available drugs for their effects on sleep than are any potential differences in their pharmacodynamic properties.

Molecular Targets for Benzodiazepine Actions in the CNS

Benzodiazepines are believed to exert most of their effects by interacting with inhibitory neurotransmitter receptors directly activated by GABA. GABA receptors are membrane-bound proteins that can be divided into two major subtypes: GABAA and GABAB receptors. The ionotropic GABAA receptors are composed of five subunits that coassemble to form an integral chloride channel. GABAA receptors are responsible for most inhibitory neurotransmission in the CNS. In contrast, the metabotropic GABAB receptors, made up of single peptides with seven transmembrane domains, are coupled to their signal transduction mechanisms by G proteins. Benzodiazepines act at GABAA but not GABAB receptors by binding directly to a specific site that is distinct from the GABA binding site on the receptor/ion channel complex. Unlike barbiturates, benzodiazepines do not directly activate GABAA receptors but require GABA to express their effects; i.e., they only modulate the effects of GABA. Benzodiazepines and GABA analogs bind to their respective sites on brain membranes with nanomolar affinity. Benzodiazepines modulate GABA binding, and GABA alters benzodiazepine binding, in an allosteric fashion. Benzodiazepine-receptor ligands can act as agonists, antagonists, or inverse agonists at the benzodiazepine receptor site, depending on the compound. Agonists at the benzodiazepine receptor increase, while inverse agonists decrease, the amount of chloride current generated by GABAA-receptor activation. Benzodiazepine-receptor agonists shift GABA concentration-response curves to the left, while inverse agonists shift the curves to the right. Both of these effects can be blocked by antagonists at the benzodiazepine-receptor site. In the absence of a benzodiazepine-receptor agonist or inverse agonist, a benzodiazepine-receptor antagonist does not affect GABAA-receptor function.
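The left and right shifts of the GABA concentration-response curve described above can be sketched with a simple Hill-type model in which an allosteric modulator scales the apparent EC50. This is an illustrative toy model only; the EC50, Hill coefficient, and shift factors below are assumed values for demonstration, not experimental pharmacological data.

```python
def gaba_response(conc_um, ec50_um=10.0, hill=1.5, ec50_shift=1.0):
    """Fractional chloride-current response to GABA (Hill-type model).

    ec50_shift < 1 mimics a benzodiazepine-site agonist (left shift:
    apparent EC50 decreases); ec50_shift > 1 mimics an inverse agonist
    (right shift); 1.0 mimics no modulator, or an antagonist alone.
    """
    apparent_ec50 = ec50_um * ec50_shift
    return conc_um**hill / (conc_um**hill + apparent_ec50**hill)

gaba = 5.0  # micromolar; an arbitrary test concentration
baseline = gaba_response(gaba)
with_agonist = gaba_response(gaba, ec50_shift=0.5)   # left-shifted curve
with_inverse = gaba_response(gaba, ec50_shift=2.0)   # right-shifted curve
print(with_inverse < baseline < with_agonist)  # True
```

Note that in this model the maximal response at saturating GABA is unchanged by the modulator, consistent with the text's point that benzodiazepines modulate, rather than directly activate, the channel.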
One such antagonist, flumazenil, is used clinically to reverse the effects of high doses of benzodiazepines. The behavioral and electrophysiological effects of benzodiazepines also can be reduced or prevented by prior treatment with antagonists (e.g., bicuculline) at the GABA binding site. The strongest evidence that benzodiazepines act directly on GABAA receptors comes from molecular cloning of cDNAs encoding subunits of the GABAA receptor complex (Schofield et al., 1987; Pritchett et al., 1989). When receptors formed of the appropriate subunits (see below) are studied in an in vitro expression system, high-affinity benzodiazepine binding sites are seen, as are GABA-activated chloride conductances that are enhanced by benzodiazepine-receptor agonists. The properties of the expressed receptors are generally similar to those of GABAA receptors found in most CNS neurons. Each GABAA receptor is believed to consist of a pentamer of homologous subunits. Thus far, 16 different subunits have been identified and classified into seven subunit families: six α, three β, three γ, and single δ, ε, π, and θ subunits. Additional complexity arises from RNA splice variants of some of these subunits (e.g., γ2 and α6). The exact subunit structures of native GABAA receptors remain unknown, but it is thought that most GABAA receptors are composed of α, β, and γ subunits that coassemble with some uncertain stoichiometry. The multiplicity of subunits generates heterogeneity in GABAA receptors and is responsible, at least in part, for the pharmacological diversity in benzodiazepine receptors detected by behavioral, biochemical, and functional studies. Studies of cloned GABAA receptors have shown that the coassembly of a γ subunit with α and β subunits confers benzodiazepine sensitivity to GABAA receptors (Pritchett et al., 1989). Receptors composed solely of α and β subunits produce functional GABAA receptors that also respond to barbiturates, but they neither bind nor are affected by benzodiazepines.
Benzodiazepines are believed to bind at the interface between α and γ subunits, and both subunits determine the pharmacology of the benzodiazepine-receptor site (McKernan et al., 1995). For example, combinations containing the α1 subunit show pharmacology distinct from that of receptors containing α2, α3, or α5 subunits (Pritchett and Seeburg, 1990), reminiscent of the pharmacological heterogeneity detected with radioligand binding studies using brain membranes. Receptors containing the α6 subunit do not display high-affinity binding of diazepam and appear to
be selective for the benzodiazepine-receptor inverse agonist RO 15-4513, which has been tested as an alcohol antagonist (Lüddens et al., 1990). The subtype of γ subunit present in receptors also determines benzodiazepine pharmacology, with lower-affinity binding observed in receptors containing the γ1 subunit (McKernan et al., 1995). Although theoretically hundreds of thousands of different GABAA receptors could be assembled from all these different subunits, there are constraints on the assembly of these receptors that limit their numbers (Sieghart et al., 1999). Recent work is beginning to show which GABAA-receptor subunits are responsible for particular effects of benzodiazepines in vivo. The mutation to arginine of a histidine residue at position 101 of the GABAA receptor α1 subunit renders receptors containing that subunit insensitive to the GABA-enhancing effects of diazepam (Kleingoor et al., 1993). Mice bearing these mutated subunits fail to exhibit the sedative, amnestic, and, in part, the anticonvulsant effects of diazepam, while retaining sensitivity to the anxiolytic, muscle-relaxant, and ethanol-enhancing effects (Rudolph et al., 1999; McKernan et al., 2000). Conversely, mice bearing the equivalent mutation in the α2 subunit of the GABAA receptor display insensitivity to the anxiolytic effects of diazepam (Löw et al., 2000). The attribution of specific behavioral effects of benzodiazepines to individual receptor subunits will aid in the development of new compounds exhibiting fewer undesired side effects. For example, the experimental compound L-838,417 enhances the effects of GABA on receptors containing α2, α3, or α5 subunits but lacks efficacy at receptors containing the α1 subunit; it is thus anxiolytic but not sedating (McKernan et al., 2000). GABAA-receptor subunits also may play roles in the targeting of assembled receptors to their proper locations in synapses.
The production of γ2 subunit knockout mice demonstrated that receptors lacking a γ2 subunit were not properly localized to synapses, although receptors lacking these subunits were formed and translocated to cell surfaces (Essrich et al., 1998). The synaptic clustering molecule gephyrin also was found to play a role in receptor localization.
GABAA Receptor-Mediated Electrical Events: In Vivo Properties
The remarkable safety of the benzodiazepines is likely related to the fact that the production of their effects in vivo depends on the presynaptic release of GABA; in the absence of GABA, benzodiazepines have no effects on GABAA-receptor function. Although barbiturates also enhance the effects of GABA at low doses, they directly activate GABAA receptors at higher doses, which can lead to profound CNS depression (see below). Further, the ability of benzodiazepines to release suppressed behaviors and to produce sedation can be ascribed in part to potentiation of GABA-ergic pathways that serve to regulate the firing of neurons containing various monoamines (see Chapter 12: Neurotransmission and the Central Nervous System). These neurons are known to promote behavioral arousal and are important mediators of the inhibitory effects of fear and punishment on behavior. Finally, inhibitory effects on muscular hypertonia or the spread of seizure activity can be rationalized by potentiation of inhibitory GABA-ergic circuits at various levels of the neuraxis. In most studies conducted in vivo or in situ, the local or systemic administration of benzodiazepines reduces the spontaneous or evoked electrical activity of major (large) neurons in all regions of the brain and spinal cord. The activity of these neurons is regulated in part by small inhibitory interneurons (predominantly GABA-ergic) arranged in both feedback and feedforward types of circuits.
The magnitude of the effects produced by benzodiazepines can vary widely and depends on such factors as the types of inhibitory circuits that are operating, the sources and intensity of excitatory input, and the manner in which experimental manipulations are performed and assessed. For example, feedback circuits often involve powerful inhibitory synapses on the neuronal soma near the axon hillock, which are supplied predominantly by recurrent pathways. The synaptic or exogenous application of GABA to this region increases chloride conductance and can prevent neuronal discharge by shunting electrical currents that would otherwise depolarize the membrane of
the initial segment. Accordingly, benzodiazepines markedly prolong the period following brief activation of recurrent GABA-ergic pathways during which neither spontaneous nor applied excitatory stimuli can evoke neuronal discharge; this effect is reversed by the GABAA-receptor antagonist bicuculline.
Molecular Basis for Benzodiazepine Regulation of GABAA Receptor-Mediated Electrical Events
Electrophysiological studies in vitro have shown that the enhancement of GABA-induced chloride currents by benzodiazepines results primarily from an increase in the frequency of bursts of openings of chloride channels produced by submaximal amounts of GABA (Twyman et al., 1989). Inhibitory synaptic transmission measured after stimulation of afferent fibers is potentiated by benzodiazepines at therapeutically relevant concentrations. Prolongation of spontaneous miniature inhibitory postsynaptic currents (IPSCs) by benzodiazepines also has been observed. Although sedative barbiturates also enhance such chloride currents, they do so by prolonging the duration of individual channel-opening events. Macroscopic measurements of GABAA receptor-mediated currents indicate that benzodiazepines shift the GABA concentration-response curve to the left without increasing the maximum current evoked by GABA. Taken together with the in vivo data, these findings are consistent with a model in which benzodiazepines exert their major actions by increasing the gain of inhibitory neurotransmission mediated by GABAA receptors. As noted above, certain experimental benzodiazepines and other structurally related compounds act as inverse agonists to reduce GABA-induced chloride currents, promote convulsions, and produce other in vivo effects opposite to those induced by the benzodiazepines in clinical use (see Gardner, 1988; Gardner et al., 1993).
A few compounds, most notably flumazenil, can block the effects of both clinically used benzodiazepines and inverse agonists in vitro and in vivo but have no detectable actions by themselves. The conceptual advances brought about by molecular studies have strengthened the hypothesis that benzodiazepines act mainly at GABAA receptors. Moreover, molecular diversity helps clarify many previous observations that appeared to conflict with this hypothesis (for reviews, see De Lorey and Olsen, 1992; Doble and Martin, 1992; Sieghart, 1992; Ragan et al., 1993; and Symposium, 1992). Nonetheless, some observations are difficult to reconcile with the hypothesis that all effects of benzodiazepines are mediated via GABAA receptors. Low concentrations of benzodiazepines induce depressant effects on hippocampal neurons that are not blocked by bicuculline or picrotoxin (Polc, 1988). The induction of sleep in rats by benzodiazepines also is insensitive to bicuculline or picrotoxin but is prevented by flumazenil (see Mendelson, 1992). At higher concentrations, corresponding to those producing hypnosis and amnesia during preanesthetic medication (see Chapter 14: General Anesthetics) or those achieved during the treatment of status epilepticus (see Chapter 21: Drugs Effective in the Therapy of the Epilepsies), the actions of the benzodiazepines may involve a number of other mechanisms. These include inhibition of the uptake of adenosine and the resultant potentiation of the actions of this endogenous neuronal depressant (see Phillis and O'Regan, 1988), as well as the GABA-independent inhibition of Ca2+ currents, of the Ca2+-dependent release of neurotransmitter, and of tetrodotoxin-sensitive Na+ channels (see Macdonald and McLean, 1986).
The macromolecular complex containing GABA-regulated chloride channels also may be a site of action of general anesthetics, ethanol, inhaled drugs of abuse, and certain metabolites of endogenous steroids (Mehta and Ticku, 1999; Beckstead et al., 2000). Among the latter, allopregnanolone (3α-hydroxy, 5α-dihydroprogesterone) is of particular interest. This compound, a metabolite of progesterone that can be formed in the brain from precursors in the circulation as well as from those synthesized by glial cells, produces barbiturate-like effects, including promotion of GABA-induced chloride currents and enhanced binding of benzodiazepines and GABA-receptor agonists. Like the barbiturates, higher concentrations of the steroid activate chloride currents in the absence of GABA, and its effects do not require the presence of a γ subunit in GABAA receptors
expressed in transfected cells. Unlike the barbiturates, however, the steroid cannot reduce excitatory responses to glutamate (see below). These effects are produced very rapidly and apparently are mediated by interactions at sites on the cell surface. A congener of allopregnanolone (alfaxalone) previously was used outside the United States for the induction of anesthesia.
Respiration
Hypnotic doses of benzodiazepines are without effect on respiration in normal subjects, but special care must be taken in the treatment of children (Kriel et al., 2000) and individuals with impaired hepatic function, such as alcoholics (Guglielminotti et al., 1999). At higher doses, such as those used for preanesthetic medication or for endoscopy, benzodiazepines slightly depress alveolar ventilation and cause respiratory acidosis as the result of a decrease in hypoxic rather than hypercapnic drive; these effects are exaggerated in patients with chronic obstructive pulmonary disease (COPD), and alveolar hypoxia and/or CO2 narcosis may result. These drugs can cause apnea during anesthesia or when given with opioids. Patients severely intoxicated with benzodiazepines usually require respiratory assistance only when they also have ingested another CNS-depressant drug, most commonly alcohol. By contrast, hypnotic doses of benzodiazepines may worsen sleep-related breathing disorders by adversely affecting the control of the upper airway muscles or by decreasing the ventilatory response to CO2 (see Guilleminault, in Symposium, 1990b). The latter effect may be sufficient to cause hypoventilation and hypoxemia in some patients with severe COPD, although benzodiazepines may improve sleep and sleep structure in some instances. In patients with obstructive sleep apnea (OSA), hypnotic doses of benzodiazepines may decrease muscular tone in the upper airway and exaggerate the impact of apneic episodes on alveolar hypoxia, pulmonary hypertension, and cardiac ventricular load.
Many physicians consider the presence of OSA to be a contraindication for the use of alcohol or any sedative-hypnotic agent, including a benzodiazepine; caution also should be exercised in patients who snore regularly, because partial airway obstruction may be converted to OSA under the influence of these drugs. In addition, benzodiazepines may promote the appearance of episodes of apnea during REM sleep (associated with decreases in oxygen saturation) in patients recovering from a myocardial infarction (Guilleminault, in Symposium, 1990b); however, the potential impact of these drugs on the survival of patients with cardiac disease has not yet been investigated.
Cardiovascular System
The cardiovascular effects of benzodiazepines are minor in normal subjects except in severe intoxication; the adverse effects in patients with obstructive sleep disorders or cardiac disease were noted above. In preanesthetic doses, all benzodiazepines decrease blood pressure and increase heart rate. With midazolam, the effects appear to be secondary to a decrease in peripheral resistance, but with diazepam they are secondary to a decrease in left ventricular work and cardiac output. Diazepam increases coronary flow, possibly by an action to increase interstitial concentrations of adenosine, and the accumulation of this cardiodepressant metabolite also may explain the negative inotropic effects of the drug. In large doses, midazolam considerably decreases both cerebral blood flow and oxygen assimilation (Nugent et al., 1982).
Gastrointestinal Tract
Benzodiazepines are thought by some gastroenterologists to improve a variety of "anxiety-related" gastrointestinal disorders, but there is a paucity of evidence for direct actions. Benzodiazepines partially protect against stress ulcers in rats, and diazepam markedly decreases nocturnal gastric
secretion in human beings.
Absorption, Fate, and Excretion
The physicochemical and pharmacokinetic properties of the benzodiazepines greatly affect their clinical utility. They all have high lipid:water distribution coefficients in the nonionized form; nevertheless, lipophilicity varies more than 50-fold according to the polarity and electronegativity of various substituents. All of the benzodiazepines are essentially completely absorbed, with the exception of clorazepate; this drug is rapidly decarboxylated in gastric juice to N-desmethyldiazepam (nordazepam), which subsequently is absorbed completely. Some benzodiazepines (e.g., prazepam and flurazepam) reach the systemic circulation only in the form of active metabolites. Drugs active at the benzodiazepine receptor may be divided into four categories based on their elimination half-lives: (1) ultra-short-acting benzodiazepines; (2) short-acting agents, with t1/2 less than 6 hours, including triazolam, the nonbenzodiazepine zolpidem (t1/2 approximately 2 hours), and zopiclone (t1/2 5 to 6 hours); (3) intermediate-acting agents, with t1/2 of 6 to 24 hours, including estazolam and temazepam; and (4) long-acting agents, with t1/2 greater than 24 hours, including flurazepam, diazepam, and quazepam. The benzodiazepines and their active metabolites bind to plasma proteins. The extent of binding correlates strongly with lipid solubility and ranges from about 70% for alprazolam to nearly 99% for diazepam. The concentration in the cerebrospinal fluid (CSF) is approximately equal to the concentration of free drug in plasma. While competition with other protein-bound drugs may occur, no clinically significant examples have been reported.
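The four half-life categories above can be summarized as a small classifier. This is only a restatement of the boundaries given in the text (6 and 24 hours); because no numeric cutoff is stated for the ultra-short-acting category, it is omitted here:

```python
# Classification by elimination half-life, using the boundaries quoted in
# the text (categories 2-4). The ultra-short-acting category has no stated
# numeric boundary in the text and is therefore not modeled.

def half_life_category(t_half_h):
    """Return the text's half-life category for a given t1/2 in hours."""
    if t_half_h < 6:
        return "short-acting"
    elif t_half_h <= 24:
        return "intermediate-acting"
    return "long-acting"

assert half_life_category(2) == "short-acting"          # e.g., zolpidem
assert half_life_category(10) == "intermediate-acting"  # e.g., temazepam
assert half_life_category(40) == "long-acting"          # e.g., quazepam
```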
The plasma concentrations of most benzodiazepines exhibit patterns that are consistent with two-compartment models (see Chapter 1: Pharmacokinetics: The Dynamics of Drug Absorption, Distribution, and Elimination), but three-compartment models appear to be more appropriate for the compounds with the highest lipid solubility. Accordingly, there is rapid uptake of benzodiazepines into the brain and other highly perfused organs after intravenous administration (or oral administration of a rapidly absorbed compound); rapid uptake is followed by a phase of redistribution into tissues that are less well perfused, especially muscle and fat. Redistribution is most rapid for drugs with the highest lipid solubility. In the regimens used for nighttime sedation, the rate of redistribution sometimes can have a greater influence than the rate of biotransformation on the duration of CNS effects (Dettli, in Symposium, 1986a). The kinetics of redistribution of diazepam and other lipophilic benzodiazepines are complicated by enterohepatic circulation. The volumes of distribution of the benzodiazepines are large, and in many cases are increased in elderly patients (Swift and Stevenson, in Symposium, 1983). These drugs cross the placental barrier and are secreted into breast milk. The benzodiazepines are metabolized extensively by enzymes of the cytochrome P450 family, particularly CYP3A4 and CYP2C19. Some benzodiazepines, such as oxazepam, are conjugated directly and are not metabolized by these enzymes (see Tanaka, 1999). Erythromycin, clarithromycin, ritonavir, itraconazole, ketoconazole, nefazodone, and grapefruit juice are inhibitors of CYP3A4 and can affect the metabolism of benzodiazepines (Dresser et al., 2000). Because active metabolites of some benzodiazepines are biotransformed more slowly than are the parent compounds, the duration of action of many benzodiazepines bears little relationship to the half-time of elimination of the drug that has been administered.
For example, the half-life of flurazepam in plasma is 2 to 3 hours, but that of a major active metabolite (N-desalkylflurazepam) is 50 hours or
more. Conversely, the rate of biotransformation of those agents that are inactivated by the initial reaction is an important determinant of their duration of action; these agents include oxazepam, lorazepam, temazepam, triazolam, and midazolam. Metabolism of the benzodiazepines occurs in three major stages. These stages and the relationships between the drugs and their metabolites are shown in Table 17–2. For those benzodiazepines that bear a substituent at position 1 (or 2) of the diazepine ring, the initial and most rapid phase of metabolism involves modification and/or removal of the substituent. With the exception of triazolam, alprazolam, estazolam, and midazolam, which contain either a fused triazolo or imidazolo ring, the eventual products are N-desalkylated compounds; these are all biologically active. One such compound, nordazepam, is a major metabolite common to the biotransformation of diazepam, clorazepate, prazepam, and halazepam; it also is formed from demoxepam, an important metabolite of chlordiazepoxide. The second phase of metabolism involves hydroxylation at position 3 and also usually yields an active derivative (e.g., oxazepam from nordazepam). The rates of these reactions are usually very much slower than the first stage (half-times greater than 40 to 50 hours), such that appreciable accumulation of hydroxylated products with intact substituents at position 1 does not occur. There are two significant exceptions to this rule: (1) small amounts of temazepam accumulate during the chronic administration of diazepam (not shown in Table 17–2), and (2) following the replacement of sulfur with oxygen in quazepam, most of the resultant 2-oxoquazepam is slowly hydroxylated at position 3 without removal of the N-alkyl group. However, only small amounts of the 3-hydroxyl derivative accumulate during the chronic administration of quazepam, because this compound is conjugated at an unusually rapid rate.
By contrast, the N-desalkylflurazepam that is formed by the "minor" metabolic pathway does accumulate during quazepam administration, and it contributes significantly to the overall clinical effect. The third major phase of metabolism is the conjugation of the 3-hydroxyl compounds, principally with glucuronic acid; the half-times of these reactions are usually between 6 and 12 hours, and the products are invariably inactive. Conjugation is the only major route of metabolism available for oxazepam and lorazepam, and it is the preferred pathway for temazepam because of the slower conversion of this compound to oxazepam. Triazolam and alprazolam are metabolized principally by initial hydroxylation of the methyl group on the fused triazolo ring; the absence of a chlorine residue in ring C of alprazolam slows this reaction significantly. The products, sometimes referred to as α-hydroxylated compounds, are quite active but are metabolized very rapidly, primarily by conjugation with glucuronic acid, such that there is no appreciable accumulation of active metabolites. The fused triazolo ring in estazolam lacks a methyl group, and it is hydroxylated to only a limited extent; the major route of metabolism involves the formation of the 3-hydroxyl derivative. The corresponding hydroxyl derivatives of triazolam and alprazolam also are formed to a significant extent. Compared to compounds without the triazolo ring, the rate of this reaction for all three drugs is unusually swift, and the 3-hydroxyl compounds are rapidly conjugated or oxidized further to benzophenone derivatives before excretion. Midazolam is metabolized rapidly, primarily by hydroxylation of the methyl group on the fused imidazo ring; only small amounts of 3-hydroxyl compounds are formed. The α-hydroxylated compound, which has appreciable biological activity, is eliminated with a half-life of 1 hour after conjugation with glucuronic acid.
Variable and sometimes substantial accumulation of this metabolite has been noted during intravenous infusion (Oldenhof et al., 1988). The aromatic rings (A and C) of the benzodiazepines are hydroxylated to only a small extent. The only important metabolism at these sites is the reduction of the 7-nitro substituents of clonazepam,
nitrazepam, and flunitrazepam; the half-lives of these reactions are usually 20 to 40 hours. The resulting amines are inactive and are acetylated to varying degrees before excretion. Because the benzodiazepines apparently do not significantly induce the synthesis of hepatic microsomal enzymes, their chronic administration usually does not result in the accelerated metabolism of other substances or of the benzodiazepines themselves. Cimetidine and oral contraceptives inhibit the N-dealkylation and 3-hydroxylation of benzodiazepines; ethanol, isoniazid, and phenytoin are less effective in this regard. These reactions usually are reduced to a greater extent in elderly patients and in patients with chronic liver disease than are those involving conjugation. Ideally, a useful hypnotic agent would have a rapid onset of action when taken at bedtime, a sufficiently sustained action to facilitate sleep throughout the night, and no residual action by the following morning. Among those benzodiazepines that are commonly used as hypnotic agents, triazolam theoretically fits this description most closely. Because of the slow rate of elimination of desalkylflurazepam, flurazepam (or quazepam) might seem to be unsuitable for this purpose. However, in practice there appear to be some disadvantages to the use of agents with a relatively rapid rate of disappearance, including the early-morning insomnia that is experienced by some patients and a greater likelihood of rebound insomnia upon discontinuation of use (see Gillin et al., 1989; Roehrs et al., in Symposium, 1990b; Roth and Roehrs, 1992). With careful selection of dosage, flurazepam and other benzodiazepines with slower rates of elimination than triazolam can be used effectively (see Vogel, 1992). The biotransformation and pharmacokinetic properties of the benzodiazepines have been reviewed by Greenblatt (1991), Greenblatt and Wright (1993), Greenblatt et al. (1983a,b, 1991), and Hilbert and Battista (1991).
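The consequence of a long-lived active metabolite for nightly dosing can be made concrete with the standard first-order accumulation ratio, R = 1/(1 − 2^(−τ/t1/2)). The half-lives used below are the approximate figures quoted earlier in this section:

```python
# Steady-state accumulation with repeated dosing at interval tau, assuming
# simple first-order elimination. Half-lives are approximate values from
# the text (triazolam ~3 h; N-desalkylflurazepam ~50 h).

def accumulation_ratio(t_half_h, tau_h=24.0):
    """Ratio of steady-state to first-dose exposure for once-daily dosing."""
    return 1.0 / (1.0 - 2.0 ** (-tau_h / t_half_h))

assert accumulation_ratio(3.0) < 1.01    # triazolam: essentially no buildup
assert accumulation_ratio(50.0) > 3.0    # desalkylflurazepam: >3-fold buildup
```

This is why the clinical effect of flurazepam during chronic use comes to reflect the accumulated metabolite rather than the short-lived parent drug.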
Untoward Effects
At the time of peak concentration in plasma, hypnotic doses of benzodiazepines can be expected to cause varying degrees of lightheadedness, lassitude, increased reaction time, motor incoordination, impairment of mental and motor functions, confusion, and anterograde amnesia. Cognition appears to be affected less than motor performance. All of these effects can greatly impair driving and other psychomotor skills. Interaction with ethanol may be especially serious. When the drug is given at the intended time of sleep, the persistence of these effects during the waking hours is undesirable. These residual effects are clearly dose-related and can be insidious, since most subjects underestimate the degree of their impairment. Residual daytime sleepiness also may be present as an adverse effect, even though successful drug therapy can reduce the daytime sleepiness resulting from chronic insomnia (see Dement, 1991). The intensity and incidence of CNS toxicity generally increase with age; both pharmacokinetic and pharmacodynamic factors are involved (see Meyer, 1982; Swift et al., in Symposium, 1983; Monane, 1992). Other relatively common side effects of benzodiazepines are weakness, headache, blurred vision, vertigo, nausea and vomiting, epigastric distress, and diarrhea; joint pains, chest pains, and incontinence may occur in a few recipients. Anticonvulsant benzodiazepines sometimes actually increase the frequency of seizures in patients with epilepsy. The possible adverse effects of alterations in the sleep pattern are discussed at the end of this chapter.
Adverse Psychological Effects
Benzodiazepines may cause paradoxical effects. For example, flurazepam may occasionally increase the incidence of nightmares, especially during the first week of use, and sometimes causes
garrulousness, anxiety, irritability, tachycardia, and sweating. Amnesia, euphoria, restlessness, hallucinations, and hypomanic behavior have been reported to occur during use of various benzodiazepines. The release of bizarre uninhibited behavior has been noted in some users, while hostility and rage may occur in others; collectively, these are sometimes referred to as disinhibition or dyscontrol reactions. Paranoia, depression, and suicidal ideation also occasionally may accompany the use of these agents. Such paradoxical or disinhibition reactions are rare and appear to be dose-related. Because of reports of an increased incidence of confusion and abnormal behaviors, triazolam has been banned in the United Kingdom. Review by the United States Food and Drug Administration (FDA), however, declared triazolam to be safe and effective in low doses of 0.125 to 0.25 mg. Hindmarch et al. (1993) surveyed British family practitioners who had switched their patients from triazolam to a variety of other hypnotics after the ban in the United Kingdom and found that the patients did not have fewer side effects with the replacement treatments. This report is consonant with controlled studies that do not support the conclusion that such reactions occur more frequently with any one benzodiazepine than with others (see Jonas et al., 1992; Rothschild, 1992). Chronic benzodiazepine use poses a risk for development of dependence and abuse, but not to the same extent as seen with older sedatives and other recognized drugs of abuse (Uhlenhuth et al., 1999). Abuse of benzodiazepines includes the use of flunitrazepam (ROHYPNOL) as a "date-rape" drug (Woods and Winger, 1997). Mild dependence may develop in many patients who have taken therapeutic doses of benzodiazepines on a regular basis for prolonged periods. Withdrawal symptoms may include temporary intensification of the problems that originally prompted their use (e.g., insomnia, anxiety).
Dysphoria, irritability, sweating, unpleasant dreams, tremors, anorexia, and faintness or dizziness also may occur, especially when withdrawal of the benzodiazepine occurs abruptly (Petursson, 1994). Hence, it is prudent to taper the dosage gradually when therapy is to be discontinued. During conventional treatment regimens, very few individuals increase their intake without instructions to do so, and very few manifest compulsive drug-seeking behavior upon discontinuation of a benzodiazepine. Patients who have histories of drug or alcohol abuse are most apt to use these agents inappropriately, and abuse of benzodiazepines usually occurs as part of a pattern of abuse of multiple drugs. In such individuals, benzodiazepines seldom are preferred to barbiturates or even alcohol, but they often are combined with those drugs to either accentuate their effect (e.g., alcohol, opiates) or reduce their toxicity (e.g., cocaine). The use of high doses of benzodiazepines over prolonged periods can lead to more severe symptoms after discontinuing the drug, including agitation, depression, panic, paranoia, myalgia, muscle twitches, and even convulsions and delirium. Dependence on benzodiazepines and their abuse have been reviewed by Woods et al. (1992) and in a report edited by DuPont (1988). In spite of the adverse effects reviewed above, the benzodiazepines are relatively safe drugs. Even huge doses are rarely fatal unless other drugs are taken concomitantly. Ethanol is a common contributor to deaths involving benzodiazepines, and true coma is uncommon in the absence of another CNS depressant. Although overdosage with a benzodiazepine rarely causes severe cardiovascular or respiratory depression, therapeutic doses can further compromise respiration in patients with COPD or obstructive sleep apnea (see discussion of effects in Respiration, above). 
A wide variety of allergic, hepatotoxic, and hematologic reactions to the benzodiazepines may occur, but the incidence is quite low; these reactions have been associated with the use of flurazepam and triazolam but not with temazepam. Large doses taken just prior to or during labor may cause hypothermia, hypotonia, and mild respiratory depression in the neonate. Abuse by the pregnant mother can result in a withdrawal syndrome in the newborn. Except for additive effects with other sedative or hypnotic drugs, reports of clinically important,
pharmacodynamic interactions between benzodiazepines and other drugs have been infrequent. Ethanol increases both the rate of absorption of benzodiazepines and the associated CNS depression. Valproate and benzodiazepines in combination may cause psychotic episodes. Pharmacokinetic interactions are mentioned above.
Therapeutic Uses
The therapeutic uses and routes of administration of individual benzodiazepines that currently are marketed in the United States are summarized in Table 17–3. It should be emphasized that most benzodiazepines can be used interchangeably. For example, diazepam can be used for alcohol withdrawal, and most benzodiazepines work as hypnotics. In general, the therapeutic uses of a given benzodiazepine depend on its half-life and may not match the marketed indications. Benzodiazepines that are useful as anticonvulsants have a long half-life, and rapid entry into the brain is required for efficacy in the treatment of status epilepticus. A short elimination half-life is desirable for hypnotics, although it carries the drawback of increased abuse liability and greater severity of withdrawal after discontinuation of chronic use. Antianxiety agents, in contrast, should have a long half-life, despite the drawback of the risk of neuropsychological deficits caused by drug accumulation. The use of the benzodiazepines as hypnotics and sedatives is discussed later in this chapter (see also Symposium, 1990b; Teboul and Chouinard, 1991; Vogel, 1992; Dement, 1992; Walsh and Engelhardt, 1992; Maczaj, 1993). The use of benzodiazepines as antianxiety agents and anticonvulsants is discussed in Chapters 19: Drugs and the Treatment of Psychiatric Disorders: Depression and Anxiety Disorders and 21: Drugs Effective in the Therapy of the Epilepsies, respectively, and their roles in preanesthetic medication and anesthesia are described in Chapters 13: History and Principles of Anesthesiology and 14: General Anesthetics.
The utility of benzodiazepines as muscle relaxants is discussed in Chapter 22: Treatment of Central Nervous System Degenerative Disorders.
Novel Benzodiazepine-Receptor Agonists
Hypnotics in this class include zopiclone (not available in the United States), zolpidem (AMBIEN), and zaleplon (SONATA). Although the chemical structures of these compounds do not resemble those of benzodiazepines, it is assumed that their therapeutic efficacy is due to their agonist effects at the benzodiazepine receptor. Zaleplon and zolpidem are effective in relieving sleep-onset insomnia. Both drugs have been approved by the FDA for use for up to 7 to 10 days at a time. There is evidence that both zaleplon and zolpidem have sustained hypnotic efficacy, without rebound insomnia on abrupt discontinuation (Mitler, 2000; Walsh et al., 2000). Zaleplon and zolpidem have similar degrees of efficacy. Zolpidem has a half-life of about 2 hours, which is sufficient to cover most of a typical 8-hour sleep period, and is presently approved for bedtime use only. Zaleplon has a shorter half-life, about 1 hour, which offers the possibility of safe dosing later in the night, within 4 hours of the anticipated rising time. As a result, zaleplon is approved for use immediately at bedtime or when the patient has difficulty falling asleep after bedtime. Because of its short half-life, zaleplon has not been shown to differ from placebo in measures of duration of sleep and number of awakenings. Zaleplon and zolpidem may differ in residual side effects; late-night administration of zolpidem has been associated with morning sedation, delayed reaction time, and anterograde amnesia, whereas zaleplon has had no more side effects than placebo. The abuse potential of these drugs appears to be similar to that of the benzodiazepines.
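The dosing-window argument above reduces to a first-order decay calculation. Using the approximate half-lives quoted in the text, and ignoring absorption kinetics, the fraction of a dose still present at the anticipated rising time can be compared:

```python
# Fraction of a dose remaining after time t with first-order elimination:
# remaining = 2**(-t / t_half). Half-lives are the approximate values in
# the text (zaleplon ~1 h; zolpidem ~2 h); absorption is ignored.

def fraction_remaining(t_h, t_half_h):
    return 2.0 ** (-t_h / t_half_h)

# Dosed 4 hours before the anticipated rising time:
zaleplon_left = fraction_remaining(4.0, 1.0)   # about 6% of the dose
zolpidem_left = fraction_remaining(4.0, 2.0)   # about 25% of the dose

assert zaleplon_left < 0.10
assert zolpidem_left > 0.20
```

The roughly fourfold difference in residual drug at rising time is consistent with the reports of morning sedation after late-night zolpidem but not zaleplon.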

Zaleplon Zaleplon (SONATA) is a nonbenzodiazepine of the pyrazolopyrimidine class of compounds. Its structural formula is shown below.

Although its chemical structure is unrelated to that of benzodiazepines, zaleplon preferentially binds to the benzodiazepine receptor site on GABAA receptors containing the α1 subunit of the receptor. Zaleplon is rapidly absorbed and reaches peak plasma concentrations in about one hour. Its half-life is approximately one hour. Its bioavailability is approximately 30% because of presystemic metabolism. Zaleplon has a volume of distribution of approximately 1.4 liters/kg and plasma protein binding of approximately 60%. Zaleplon is metabolized largely by aldehyde oxidase and to a lesser extent by CYP3A4. Its oxidative metabolites are converted to glucuronides and eliminated in urine. Less than 1% of zaleplon is excreted unchanged in urine. None of zaleplon's metabolites is pharmacologically active. Zaleplon (usually administered in 5-, 10-, or 20-mg doses) has been studied in clinical trials in patients with chronic or transient insomnia (for a review, see Dooley and Plosker, 2000). Studies have focused on its effects in decreasing sleep latency. Zaleplon-treated subjects with either chronic or transient insomnia have experienced shorter periods of sleep latency than have placebo-treated subjects. Tolerance to zaleplon does not appear to occur, nor do rebound insomnia or withdrawal symptoms after stopping treatment. Zolpidem Zolpidem (AMBIEN) is a nonbenzodiazepine sedative-hypnotic drug that became available in the United States in 1993 after 5 years of use in Europe. It is classified as an imidazopyridine and has the following chemical structure:

Although the actions of zolpidem are due to agonist effects on benzodiazepine receptors and
generally resemble those of benzodiazepines, it produces only weak anticonvulsant effects in experimental animals, and its relatively strong sedative actions appear to mask anxiolytic effects in various animal models of anxiety (see Langtry and Benfield, 1990). Although the chronic administration of zolpidem to rodents produces neither tolerance to its sedative effects nor signs of withdrawal when the drug is discontinued and flumazenil is injected (Perrault et al., 1992), evidence of tolerance and physical dependence has been observed with chronic administration of zolpidem to baboons (Griffiths et al., 1992). Unlike the benzodiazepines, zolpidem has little effect on the stages of sleep in normal human subjects. The drug is as effective as benzodiazepines in shortening sleep latency and prolonging total sleep time in patients with insomnia. Following discontinuation of zolpidem, the beneficial effects on sleep have been reported to persist for up to one week (Herrmann et al., 1993), but mild rebound insomnia on the first night also has occurred (Anonymous, 1993). The development of tolerance and physical dependence has been seen only very rarely and under unusual circumstances (Cavallaro et al., 1993; Morselli, 1993). Indeed, zolpidem-induced improvement in the sleep time of chronic insomniacs was found in one study to be sustained during as much as 6 months of treatment without signs of withdrawal or rebound after stopping the drug (Kummer et al., 1993). Nevertheless, zolpidem currently is approved only for the short-term treatment of insomnia despite the apparently benign consequences of its chronic administration. At therapeutic doses (10 to 20 mg; 5 to 10 mg in elderly patients), zolpidem infrequently produces residual daytime sedation or amnesia, and the incidence of other adverse effects (e.g., gastrointestinal complaints, dizziness) also is low. 
As with the benzodiazepines, large overdoses of zolpidem do not produce severe respiratory depression unless other agents (e.g., alcohol) also are ingested (Garnier et al., 1994). Hypnotic doses increase the hypoxia and hypercarbia of patients with obstructive sleep apnea. Zolpidem is absorbed readily from the gastrointestinal tract; first-pass hepatic metabolism results in an oral bioavailability of about 70%, but this value is lower when the drug is ingested with food because of slowed absorption and increased hepatic blood flow. Zolpidem is eliminated almost entirely by conversion to inactive products in the liver, largely through oxidation of the methyl groups on the phenyl and imidazopyridine rings to the corresponding carboxylic acids. Its half-life in plasma is approximately 2 hours in individuals with normal hepatic blood flow or function. This value may be increased twofold or more in those with cirrhosis, and it also tends to be greater in older patients; adjustment of dosage often is necessary in both categories of patients. Although little or no unchanged zolpidem is found in the urine, the elimination of the drug is slower in patients with chronic renal insufficiency, largely owing to an increase in its apparent volume of distribution. The properties of zolpidem and its therapeutic utility have been reviewed by Langtry and Benfield (1990) and by Hoehns and Perry (1993). Flumazenil: A Benzodiazepine-Receptor Antagonist Flumazenil (ROMAZICON) is an imidazobenzodiazepine (see Table 17-1) that behaves as a specific benzodiazepine antagonist. It is the first such agent to undergo an extensive clinical trial, and it was released for clinical use in 1991. As noted above, flumazenil binds with high affinity to specific sites, where it competitively antagonizes the binding and allosteric effects of benzodiazepines and other ligands. Both the electrophysiological and behavioral effects of agonist or inverse-agonist benzodiazepines or β-carbolines also are antagonized. 
In animal studies, the intrinsic pharmacological actions of flumazenil have been subtle; effects resembling those of inverse agonists sometimes have been detected at low doses, while slight benzodiazepine-like effects often have been evident at high doses. The evidence for intrinsic activity in human subjects is even more vague, except for modest anticonvulsant effects at high doses. However, anticonvulsant effects
cannot be relied upon for therapeutic utility, as the administration of flumazenil may precipitate seizures under certain circumstances (see below). Flumazenil is available only for intravenous administration. Although it is rapidly absorbed after oral administration, less than 25% of the drug reaches the systemic circulation as a result of extensive first-pass hepatic metabolism; effective oral doses are apt to cause headache and dizziness (Roncari et al., 1993). Upon intravenous administration, flumazenil is eliminated almost entirely by hepatic metabolism to inactive products with a half-life of about 1 hour; the duration of clinical effects is thus brief, and they usually persist for only 30 to 60 minutes. The primary indications for the use of flumazenil are the management of suspected benzodiazepine overdose and the reversal of sedative effects produced by benzodiazepines administered during either general anesthesia or diagnostic and/or therapeutic procedures. The administration of a series of small injections is preferred to a single bolus injection. A total of 1 mg of flumazenil given over 1 to 3 minutes usually is sufficient to abolish the effects of therapeutic doses of benzodiazepines; patients with suspected benzodiazepine overdose should respond adequately to a cumulative dose of 1 to 5 mg given over 2 to 10 minutes, and a lack of response to 5 mg of flumazenil strongly suggests that a benzodiazepine is not the major cause of sedation. Additional courses of treatment with flumazenil may be necessary within 20 to 30 minutes should sedation reappear. Flumazenil is not effective in single-drug overdoses with either barbiturates or tricyclic antidepressants. To the contrary, the administration of flumazenil may be associated with the onset of seizures under these circumstances; the risk of seizures is especially high in patients poisoned with tricyclic antidepressants (Spivey, 1992). 
Seizures or other withdrawal symptoms also may be precipitated in patients who had been taking benzodiazepines for protracted periods and in whom tolerance and/or dependence may have developed. The properties and therapeutic uses of flumazenil have been reviewed by Hoffman and Warren (1993).
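The stepwise flumazenil dosing scheme described above can be expressed as a simple decision loop. The sketch below is a schematic of the titration logic only, with hypothetical function and parameter names; it is not clinical software, and the dose figures are those quoted in the text:

```python
# Schematic of titrated reversal: repeated small doses up to a cumulative
# ceiling, stopping as soon as sedation reverses. A simulated "patient" is
# modeled simply by the cumulative dose at which a response appears.
def titrate(responds_at_cumulative_mg: float,
            increment_mg: float = 0.5,
            ceiling_mg: float = 5.0):
    """Return (cumulative dose given, reversed?)."""
    given = 0.0
    while given < ceiling_mg:
        given = min(given + increment_mg, ceiling_mg)
        if given >= responds_at_cumulative_mg:
            return given, True
    # No response to the full cumulative dose: per the text, a benzodiazepine
    # is then unlikely to be the major cause of sedation.
    return given, False
```

For a patient responding at a cumulative 1 mg, the loop stops after two 0.5-mg increments; a simulated nonresponder receives the 5-mg ceiling and returns `False`, mirroring the interpretive rule stated above.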

Barbiturates The barbiturates enjoyed a long period of extensive use as sedative-hypnotic drugs; however, except for a few specialized uses, they have been largely replaced by the much safer benzodiazepines. A more detailed description of the barbiturates can be found in the fifth edition of this textbook. Chemistry Barbituric acid is 2,4,6-trioxohexahydropyrimidine. This compound lacks central depressant activity, but the presence of alkyl or aryl groups at position 5 confers sedative-hypnotic and sometimes other activities. The general structural formula for the barbiturates and the structures of selected compounds are included in Table 17-4. The carbonyl group at position 2 takes on acidic character because of lactam-lactim ("keto"-"enol") tautomerization, which is favored by its location between the two electronegative amido nitrogens. The lactim form is favored in alkaline solution, and salts result. Barbiturates in which the oxygen at C2 is replaced by sulfur are sometimes called thiobarbiturates. These compounds are more lipid-soluble than the corresponding oxybarbiturates. In general, structural changes that increase lipid solubility decrease duration of action, decrease latency to onset of activity, accelerate metabolic degradation, and often increase hypnotic potency.

Pharmacological Properties The barbiturates reversibly depress the activity of all excitable tissues. The CNS is exquisitely sensitive, and, even when barbiturates are given in anesthetic concentrations, direct effects on peripheral excitable tissues are weak. However, serious deficits in cardiovascular and other peripheral functions occur in acute barbiturate intoxication. Central Nervous System The barbiturates can produce all degrees of depression of the CNS, ranging from mild sedation to general anesthesia. The use of barbiturates for general anesthesia is discussed in Chapter 14: General Anesthetics. Certain barbiturates, particularly those containing a 5-phenyl substituent (phenobarbital, mephobarbital), have selective anticonvulsant activity (see Chapter 21: Drugs Effective in the Therapy of the Epilepsies). The antianxiety properties of the barbiturates are not equivalent to those exerted by the benzodiazepines, especially with respect to the degree of sedation that is produced. The barbiturates may have euphoriant effects. Except for the anticonvulsant activities of phenobarbital and its congeners, the barbiturates possess a low degree of selectivity and a low therapeutic index. Thus, it is not possible to achieve a desired effect without evidence of general depression of the CNS. Pain perception and reaction are relatively unimpaired until the moment of unconsciousness, and in small doses the barbiturates increase the reaction to painful stimuli. Hence, they cannot be relied upon to produce sedation or sleep in the presence of even moderate pain. In some individuals and in some circumstances, such as in the presence of pain, barbiturates cause overt excitement instead of sedation. The fact that such paradoxical excitement occurs with other CNS depressants suggests that it may result from depression of inhibitory centers. 
Effects on Stages of Sleep Hypnotic doses of barbiturates increase the total sleep time and alter the stages of sleep in a dose-dependent manner. Like the benzodiazepines, these drugs decrease sleep latency, the number of awakenings, and the durations of REM and slow-wave sleep. During repetitive nightly administration, some tolerance to the effects on sleep occurs within a few days, and the effect on total sleep time may be reduced by as much as 50% after 2 weeks of use. Discontinuation leads to rebound increases in all the parameters reported to be decreased by barbiturates. Tolerance Both pharmacodynamic (functional) and pharmacokinetic tolerance to barbiturates can occur. The former contributes more to the decreased effect than does the latter. With chronic administration of gradually increasing doses, pharmacodynamic tolerance continues to develop over a period of weeks to months, depending on the dosage schedule, whereas pharmacokinetic tolerance reaches its peak in a few days to a week. Tolerance to the effects on mood, sedation, and hypnosis occurs more readily and is greater than that to the anticonvulsant and lethal effects; thus, as tolerance increases, the therapeutic index decreases. Pharmacodynamic tolerance to barbiturates confers tolerance to all general CNS-depressant drugs, including ethanol. Abuse and Dependence Like other CNS-depressant drugs, barbiturates are abused, and some individuals develop a
dependence upon them. These topics are discussed in Chapter 24: Drug Addiction and Drug Abuse. Sites and Mechanisms of Action on the CNS Barbiturates act throughout the CNS; nonanesthetic doses preferentially suppress polysynaptic responses. Facilitation is diminished, and inhibition usually is enhanced. The site of inhibition is either postsynaptic, as at cortical and cerebellar pyramidal cells and in the cuneate nucleus, substantia nigra, and thalamic relay neurons, or presynaptic, as in the spinal cord. Enhancement of inhibition occurs primarily at synapses where neurotransmission is mediated by GABA acting at GABAA receptors. The barbiturates exert several distinct effects on excitatory and inhibitory synaptic transmission. For example, (−)-pentobarbital potentiates GABA-induced increases in chloride conductance and depresses voltage-activated Ca2+ currents at similar concentrations (below 10 μM) in isolated hippocampal neurons; above 100 μM, chloride conductance is increased in the absence of GABA (ffrench-Mullen et al., 1993). Phenobarbital is less efficacious and much less potent in producing these effects, while (+)-pentobarbital has only weak activity. Thus, the more selective anticonvulsant properties of phenobarbital and its higher therapeutic index may be explained by its lower capacity to produce profound depression of neuronal function as compared with the anesthetic barbiturates. As noted earlier in this chapter, the mechanisms underlying the actions of barbiturates on GABAA receptors appear to be distinct from those of either GABA or the benzodiazepines, for reasons that include the following: (1) Although barbiturates also enhance the binding of GABA to GABAA receptors in a chloride-dependent and picrotoxin-sensitive fashion, they promote (rather than displace) the binding of benzodiazepines. 
(2) Barbiturates potentiate GABA-induced chloride currents by prolonging periods during which bursts of channel opening occur rather than by increasing the frequency of these bursts, as benzodiazepines do. (3) Only α and β (not γ) subunits are required for barbiturate action. (4) Barbiturate-induced increases in chloride conductance are not affected by the deletion of the tyrosine and threonine residues in the β subunit that govern the sensitivity of GABAA receptors to activation by agonists (Amin and Weiss, 1993). Subanesthetic concentrations of barbiturates also can reduce glutamate-induced depolarizations (see Chapter 12: Neurotransmission and the Central Nervous System; Macdonald and McLean, 1982); only the AMPA subtypes of glutamate receptors sensitive to kainate or quisqualate appear to be affected (Marszalec and Narahashi, 1993). Recombinant AMPA receptors also are blocked by barbiturates. At higher concentrations that produce anesthesia, pentobarbital suppresses high-frequency, repetitive firing of neurons, apparently as a result of inhibiting the function of voltage-dependent, tetrodotoxin-sensitive Na+ channels; in this case, however, both stereoisomers are about equally effective (Frenkel et al., 1990). At still higher concentrations, voltage-dependent K+ conductances are reduced. Taken together, the findings that barbiturates activate inhibitory GABAA receptors and inhibit excitatory AMPA receptors can explain the CNS-depressant effects of these agents. The mechanism of action of barbiturates has been reviewed by Saunders and Ho (1990). Peripheral Nervous Structures Barbiturates selectively depress transmission in autonomic ganglia and reduce nicotinic excitation by choline esters. This effect may account, at least in part, for the fall in blood pressure produced by intravenous oxybarbiturates and by severe barbiturate intoxication. At skeletal neuromuscular
junctions, the blocking effects of both tubocurarine and decamethonium are enhanced during barbiturate anesthesia. These actions probably result from the capacity of barbiturates at hypnotic or anesthetic concentrations to inhibit the passage of current through nicotinic cholinergic receptors. Several distinct mechanisms appear to be involved, and little stereoselectivity is evident (Roth et al., 1989). Respiration Barbiturates depress both the respiratory drive and the mechanisms responsible for the rhythmic character of respiration. The neurogenic drive is diminished by hypnotic doses, but usually no more so than during natural sleep. However, neurogenic drive is essentially eliminated by a dose three times greater than that normally used to induce sleep. Such doses also suppress the hypoxic drive and, to a lesser extent, the chemoreceptor drive. At still higher doses, the powerful hypoxic drive also fails. However, the margin between the lighter planes of surgical anesthesia and dangerous respiratory depression is sufficient to permit the ultra-short-acting barbiturates to be used, with suitable precautions, as anesthetic agents. The barbiturates only slightly depress protective reflexes until the degree of intoxication is sufficient to produce severe respiratory depression. Coughing, sneezing, hiccoughing, and laryngospasm may occur when barbiturates are employed as intravenous anesthetic agents. Indeed, laryngospasm is one of the chief complications of barbiturate anesthesia. Cardiovascular System When given orally in sedative or hypnotic doses, the barbiturates do not produce significant overt cardiovascular effects except for a slight decrease in blood pressure and heart rate such as occurs in normal sleep. In general, the effects of thiopental anesthesia on the cardiovascular system are benign in comparison with those of the volatile anesthetic agents; there is usually either no change or a fall in mean arterial pressure. 
Apparently, a decrease in cardiac output usually is sufficient to offset an increase in total calculated peripheral resistance, which sometimes is accompanied by an increase in heart rate. Cardiovascular reflexes are obtunded by partial inhibition of ganglionic transmission. This is most evident in patients with congestive heart failure or hypovolemic shock whose reflexes already are operating maximally and in whom barbiturates can cause an exaggerated fall in blood pressure. Because barbiturates also impair reflex cardiovascular adjustments to inflation of the lung, positive-pressure respiration should be used cautiously and only when necessary to maintain adequate pulmonary ventilation in patients who are anesthetized or intoxicated with a barbiturate. Other cardiovascular changes often noted when thiopental and other intravenous thiobarbiturates are administered after conventional preanesthetic medication include a decrease in renal plasma flow and in cerebral blood flow, with a marked fall in CSF pressure. Although cardiac arrhythmias are observed only infrequently, intravenous anesthesia with barbiturates can increase the incidence of ventricular arrhythmias, especially when epinephrine and halothane are also present. Anesthetic concentrations of barbiturates have direct electrophysiological effects in the heart; in addition to depressing Na+ channels, they reduce the function of at least two types of K+ channels (Nattel et al., 1990; Pancrazio et al., 1993). However, direct depression of cardiac contractility occurs only when doses several times those required to cause anesthesia are administered, which probably contributes to the cardiovascular depression that accompanies acute barbiturate poisoning. Gastrointestinal Tract

The oxybarbiturates tend to decrease the tonus of the gastrointestinal musculature and the amplitude of rhythmic contractions. The locus of action is partly peripheral and partly central, depending on the dose. A hypnotic dose does not significantly delay gastric emptying in human beings. The relief of various gastrointestinal symptoms by sedative doses is probably largely due to the central depressant action. Liver The best-known effects of barbiturates on the liver are those on the microsomal drug-metabolizing system (see Chapter 1: Pharmacokinetics: The Dynamics of Drug Absorption, Distribution, and Elimination). Acutely, the barbiturates combine with several species of cytochrome P450 and competitively interfere with the biotransformation of a number of other drugs as well as of endogenous substrates, such as steroids; other substrates may reciprocally inhibit barbiturate biotransformations. Drug interactions may result even when the other substances and barbiturates are oxidized by different microsomal enzyme systems. The chronic administration of barbiturates causes a marked increase in the protein and lipid content of the hepatic smooth endoplasmic reticulum, as well as in the activities of glucuronyl transferase and the oxidases containing cytochrome P450. The inducing effect on these enzymes results in an increased rate of metabolism of a number of drugs and endogenous substances, including steroid hormones, cholesterol, bile salts, and vitamins K and D. An increase in the rate of barbiturate metabolism also results, which accounts for part of the tolerance to barbiturates. Many sedative-hypnotics, various anesthetics, and ethanol also are metabolized by and/or induce the microsomal enzymes, and some degree of cross-tolerance can occur on this basis. 
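The quantitative consequence of enzyme induction on barbiturate kinetics can be sketched with standard first-order relationships (t½ = ln 2 · Vd/CL; average steady-state level = dose rate/CL). The volume-of-distribution and clearance figures below are hypothetical, chosen purely for illustration:

```python
import math

def half_life(vd_liters: float, cl_l_per_h: float) -> float:
    """t1/2 = ln(2) * Vd / CL for one-compartment, first-order elimination."""
    return math.log(2) * vd_liters / cl_l_per_h

def css_avg(dose_mg: float, tau_h: float, cl_l_per_h: float) -> float:
    """Average steady-state concentration = Dose / (CL * tau), in mg/L."""
    return dose_mg / (cl_l_per_h * tau_h)

VD = 40.0          # hypothetical volume of distribution, liters
CL_BASELINE = 0.3  # hypothetical clearance before induction, L/h
CL_INDUCED = 2 * CL_BASELINE  # metabolic rate roughly doubled by induction

print(f"half-life before induction: {half_life(VD, CL_BASELINE):.1f} h")
print(f"half-life after induction:  {half_life(VD, CL_INDUCED):.1f} h")

# Same daily dose, doubled clearance: average steady-state level is halved,
# one kinetic component of tolerance during chronic administration.
ratio = css_avg(100, 24, CL_INDUCED) / css_avg(100, 24, CL_BASELINE)
print(f"steady-state level ratio, induced vs baseline: {ratio}")  # 0.5
```

Doubling clearance at a fixed volume of distribution halves the half-life and, for an unchanged dosing schedule, halves the average steady-state concentration.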
Not all microsomal biotransformations of drugs and endogenous substrates are affected to the same degree, but a convenient rule of thumb is that, at maximal induction in human beings, the rates are approximately doubled. The inducing effect is not limited to the microsomal enzymes; for example, there is an increase in δ-aminolevulinic acid (ALA) synthetase, a mitochondrial enzyme, and aldehyde dehydrogenase, a cytoplasmic enzyme. The effect of barbiturates on ALA synthetase can cause dangerous exacerbations of disease in persons with intermittent porphyria. Kidney Severe oliguria or anuria may occur in acute barbiturate poisoning, largely as a result of the marked hypotension. Absorption, Fate, and Excretion For sedative-hypnotic use, the barbiturates usually are administered orally (see Table 17-4). Such doses are rapidly and probably completely absorbed; sodium salts are absorbed more rapidly than the corresponding free acids, especially from liquid formulations. The onset of action varies from 10 to 60 minutes, depending on the agent and the formulation, and is delayed by the presence of food in the stomach. When necessary, intramuscular injections of solutions of the sodium salts should be placed deeply into large muscles in order to avoid the pain and possible necrosis that can result at more superficial sites. With some agents, special preparations are available for rectal administration. The intravenous route is usually reserved for the management of status epilepticus (phenobarbital sodium) or for the induction and/or maintenance of general anesthesia (e.g., thiopental, methohexital). Barbiturates are distributed widely and readily cross the placenta. The highly lipid-soluble barbiturates, led by those used to induce anesthesia, undergo redistribution after intravenous
injection. Uptake into less vascular tissues, especially muscle and fat, leads to a decline in the concentration of barbiturate in the plasma and brain. With thiopental and methohexital, this results in the awakening of patients within 5 to 15 minutes of the injection of the usual anesthetic doses. With the exception of the less lipid-soluble aprobarbital and phenobarbital, nearly complete metabolism and/or conjugation of barbiturates in the liver precedes their renal excretion. The oxidation of radicals at C5 is the most important biotransformation responsible for termination of biological activity. Oxidation results in the formation of alcohols, ketones, phenols, or carboxylic acids, which may appear in the urine as such or as glucuronic acid conjugates. In some instances (e.g., phenobarbital), N-glycosylation is an important metabolic pathway. Other biotransformations include N-hydroxylation, desulfuration of thiobarbiturates to oxybarbiturates, opening of the barbituric acid ring, and N-dealkylation of N-alkylbarbiturates to active metabolites (e.g., mephobarbital to phenobarbital). About 25% of phenobarbital and nearly all of aprobarbital are excreted unchanged in the urine. Their renal excretion can be greatly increased by osmotic diuresis and/or alkalinization of the urine. The metabolic elimination of barbiturates is more rapid in young people than in the elderly and infants, and half-lives are increased during pregnancy, partly because of the expanded volume of distribution. Chronic liver disease, especially cirrhosis, often increases the half-life of the biotransformable barbiturates. Repeated administration, especially of phenobarbital, shortens the half-life of barbiturates that are metabolized as a result of the induction of microsomal enzymes (see above). 
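The effect of urinary alkalinization on the excretion of a weak acid such as phenobarbital follows from the Henderson-Hasselbalch relationship, on the assumption that only the un-ionized form is reabsorbed from the tubular fluid. The sketch below is illustrative arithmetic, not clinical guidance; the pKa used is an approximate literature value for phenobarbital:

```python
# Fraction of a weak acid ionized at a given pH, from Henderson-Hasselbalch:
# [ionized] / [un-ionized] = 10 ** (pH - pKa)
def ionized_fraction(ph: float, pka: float) -> float:
    ratio = 10 ** (ph - pka)
    return ratio / (1 + ratio)

PKA_PHENOBARBITAL = 7.4  # approximate value, assumed for illustration

# In acidic urine most drug is un-ionized and reabsorbable; alkalinizing
# the urine "traps" the ionized form in the tubular fluid for excretion.
for urine_ph in (6.4, 7.4, 8.0):
    f = ionized_fraction(urine_ph, PKA_PHENOBARBITAL)
    print(f"urine pH {urine_ph}: {100 * f:.0f}% ionized")
```

Moving the urine pH from 6.4 to 8.0 raises the ionized (non-reabsorbable) fraction from roughly a tenth to about four-fifths, which is the basis of the enhanced renal excretion noted above.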
The data on half-lives given in Table 17-4 show that none of the barbiturates used for hypnosis in the United States appears to have an elimination half-life that is short enough for elimination to be virtually complete in 24 hours. However, the relationship between duration of action and half-time of elimination is complicated in part by the fact that enantiomers of optically active barbiturates often differ in both biological potencies and rates of biotransformation. Nevertheless, all of these barbiturates will accumulate during repetitive administration unless appropriate adjustments in dosage are made. Furthermore, the persistence of the drug in plasma during the day favors the development of tolerance and abuse. Untoward Effects After Effects Drowsiness may last for only a few hours after a hypnotic dose of barbiturate, but residual depression of the CNS sometimes is evident the following day. Even in the absence of overt evidence of residual depression, subtle distortions of mood and impairment of judgment and fine motor skills may be demonstrable. For example, a 200-mg dose of secobarbital has been shown to impair performance of driving or flying skills for 10 to 22 hours. Residual effects also may take the form of vertigo, nausea, vomiting, or diarrhea, or sometimes may be manifested as overt excitement. The user may awaken slightly intoxicated and feel euphoric and energetic; later, as the demands of daytime activities challenge possibly impaired faculties, the user may display irritability and temper. Paradoxical Excitement In some persons, barbiturates repeatedly produce excitement rather than depression, and the patient may appear to be inebriated. This type of idiosyncrasy is relatively common among geriatric and
debilitated patients and occurs most frequently with phenobarbital and N-methylbarbiturates. Pain Barbiturates have been prescribed for localized or diffuse myalgic, neuralgic, or arthritic pain but often do not effectively treat these symptoms, especially in psychoneurotic patients with insomnia. Barbiturates may cause restlessness, excitement, and even delirium when given in the presence of pain and may make a patient's perception of pain worse. Hypersensitivity Allergic reactions occur especially in persons who tend to have asthma, urticaria, angioedema, and similar conditions. Hypersensitivity reactions in this category include localized swellings, particularly of the eyelids, cheeks, or lips, and erythematous dermatitis. Rarely, exfoliative dermatitis may be caused by phenobarbital and can prove fatal; the skin eruption may be associated with fever, delirium, and marked degenerative changes in the liver and other parenchymatous organs. Drug Interactions Barbiturates combine with other CNS depressants to cause severe depression; ethanol is the most frequent offender, and interactions with antihistamines are also common. Isoniazid, methylphenidate, and monoamine oxidase inhibitors also increase the CNS-depressant effects. Barbiturates competitively inhibit the metabolism of certain other drugs; however, the greatest number of drug interactions results from induction of hepatic microsomal enzymes and the accelerated disappearance of many drugs and endogenous substances. The metabolism of vitamins D and K is accelerated, which may hamper bone mineralization and lower Ca2+ absorption in patients taking phenobarbital and may be responsible for the reported instances of coagulation defects in neonates whose mothers had been taking phenobarbital. Hepatic enzyme induction enhances the metabolism of endogenous steroid hormones, which may cause endocrine disturbances, as well as of oral contraceptives, which may result in unwanted pregnancy. 
Barbiturates also induce the hepatic generation of toxic metabolites of chlorocarbon anesthetics and carbon tetrachloride and consequently promote lipid peroxidation, which facilitates the periportal necrosis of the liver caused by these agents. Other Untoward Effects Because barbiturates enhance porphyrin synthesis, they are absolutely contraindicated in patients with acute intermittent porphyria or porphyria variegata. In hypnotic doses, the effects of barbiturates on the control of respiration are minor; however, in the presence of pulmonary insufficiency, serious respiratory depression may occur, and the drugs are thus contraindicated. Rapid intravenous injection of a barbiturate may cause cardiovascular collapse before anesthesia ensues, so that the CNS signs of depth of anesthesia may fail to give an adequate warning of impending toxicity. Blood pressure can fall to shock levels; even slow intravenous injection of barbiturates often produces apnea and occasionally laryngospasm, coughing, and other respiratory difficulties. Barbiturate Poisoning The incidence of barbiturate poisoning has declined markedly in recent years, largely as a result of
the decline in the use of these drugs as sedative-hypnotic agents. Nevertheless, poisoning with barbiturates is a significant clinical problem; death occurs in a few percent of cases. Most of the cases are the result of deliberate attempts at suicide, but some are from accidental poisonings in children or in drug abusers. The lethal dose of barbiturate varies with many factors, but severe poisoning is likely to occur when more than ten times the full hypnotic dose has been ingested at once. If alcohol or other depressant drugs are also present, the concentrations that can cause death are lower. In severe intoxication, the patient is comatose; respiration is affected early. Breathing may be either slow or else rapid and shallow. Superficial observation of respiration may be misleading with regard to actual minute volume and to the degree of respiratory acidosis and cerebral hypoxia. Eventually, blood pressure falls owing to the effect of the drug and of hypoxia on medullary vasomotor centers; depression of cardiac contractility and sympathetic ganglia also contribute. Pulmonary complications (atelectasis, edema, and bronchopneumonia) and renal failure are likely to be the fatal complications of severe barbiturate poisoning. The optimal treatment of acute barbiturate intoxication is based on general supportive measures. Hemodialysis or hemoperfusion is only rarely necessary, and the use of CNS stimulants increases the rate of mortality. The present treatment is applicable in most respects to poisoning by any CNS depressant. Constant attention must be given to the maintenance of a patent airway and adequate ventilation and to the prevention of pneumonia; oxygen should be administered. After precautions to avoid aspiration, gastric lavage should be considered if fewer than 24 hours have elapsed since ingestion, since the barbiturate can reduce gastrointestinal motility. 
After lavage, the administration of activated charcoal and a cathartic such as sorbitol may shorten the half-life of the less lipid-soluble agents such as phenobarbital. If renal and cardiac function are satisfactory and the patient is hydrated, forced diuresis and alkalinization of the urine will hasten the excretion of aprobarbital and phenobarbital. Measures to prevent or treat atelectasis should be taken, and mechanical ventilation should be initiated when indicated. In severe acute barbiturate intoxication, circulatory collapse is a major threat. Often the patient is admitted to the hospital with severe hypotension or shock, and dehydration is often severe. Hypovolemia must be corrected, and, if necessary, the blood pressure can be supported with dopamine. Acute renal failure consequent to shock and hypoxia accounts for perhaps one-sixth of the deaths. In the event of renal failure, hemodialysis should be instituted. Intoxication by barbiturates and its management have been reviewed by Gary and Tresznewsky (1983).

Therapeutic Uses

The use of barbiturates as sedative-hypnotic drugs has declined enormously because they lack specificity of effect in the CNS, they have a lower therapeutic index than do the benzodiazepines, tolerance occurs more frequently than with benzodiazepines, the liability for abuse is greater, and the number of drug interactions is considerable. The major uses of individual barbiturates are listed in Table 17-4. As with the benzodiazepines, selection of a particular barbiturate for a given therapeutic indication is based primarily on pharmacokinetic considerations.

CNS Uses

Although barbiturates largely have been replaced by benzodiazepines and other compounds for daytime sedation, phenobarbital and butabarbital are still available as "sedatives" in a host of

combinations of questionable efficacy for the treatment of functional gastrointestinal disorders and asthma. They also are included in analgesic combinations, possibly counterproductively. Barbiturates, especially butabarbital and phenobarbital, are sometimes used to antagonize unwanted CNS-stimulant effects of various drugs, such as ephedrine, dextroamphetamine, and theophylline, although a preferred approach is adjustment of dosage or substitution of alternative therapy for the primary agents. Phenobarbital still is a widely used, and probably the only effective, treatment for hypnosedative withdrawal (Martin et al., 1979). Barbiturates still are employed in the emergency treatment of convulsions, such as occur in tetanus, eclampsia, status epilepticus, cerebral hemorrhage, and poisoning by convulsant drugs; however, benzodiazepines generally are superior in these uses. Phenobarbital sodium is used most frequently because of its anticonvulsant efficacy; however, even when administered intravenously, 15 minutes or more may be required for it to attain peak concentrations in the brain. The ultrashort- and short-acting barbiturates have a low ratio of anticonvulsant to hypnotic action, and these drugs or inhalational anesthetic agents are employed only when general anesthesia must be used to control seizures refractory to other measures. Diazepam usually is chosen for the emergency treatment of seizures. The use of barbiturates in the symptomatic therapy of epilepsy is discussed in Chapter 21: Drugs Effective in the Therapy of the Epilepsies. Ultrashort-acting agents such as thiopental or methohexital continue to be employed as intravenous anesthetics (Chapter 14: General Anesthetics). In children, rectal administration of methohexital sometimes is used for the induction of anesthesia or for sedation during imaging procedures (Manuli and Davies, 1993).
Short- and ultrashort-acting barbiturates occasionally are used as adjuncts to other agents in the production of obstetrical anesthesia. Several studies have failed to affirm gross depression of respiration in full-term infants, but premature infants clearly are more susceptible. Since evaluation of the effects on the fetus and neonate is difficult, it is prudent to avoid the use of barbiturates in obstetrics. The barbiturates are employed as diagnostic and therapeutic aids in psychiatry; these uses sometimes are referred to as narcoanalysis and narcotherapy, respectively. In low concentrations, amobarbital has been administered directly into the carotid artery prior to neurosurgery as a means of identifying the dominant cerebral hemisphere for speech. The use of this procedure has been expanded to include a more extensive neuropsychological evaluation of patients with medically intractable seizure disorders who may benefit from surgical therapy (see Smith and Riskin, 1991). Anesthetic doses of barbiturates attenuate cerebral edema resulting from surgery, head injury, or cerebral ischemia, and they may decrease infarct size and increase survival. General anesthetics do not provide protection. The procedure is not without serious danger, however, and the ultimate benefit to the patient has been questioned (see Shapiro, 1985; Smith and Riskin, 1991).

Hepatic Metabolic Uses

Because hepatic glucuronyl transferase and the bilirubin-binding Y protein are increased by the barbiturates, phenobarbital has been used successfully to treat hyperbilirubinemia and kernicterus in the neonate. The nondepressant barbiturate phetharbital (N-phenylbarbital) works equally well. Phenobarbital may improve the hepatic transport of bilirubin in patients with hemolytic jaundice.

Miscellaneous Sedative-Hypnotic Drugs

Over the years, many drugs with diverse structures have been used for their sedative-hypnotic

properties, including paraldehyde (introduced before the barbiturates), chloral hydrate, ethchlorvynol, glutethimide, methyprylon, ethinamate, and meprobamate (introduced just before the benzodiazepines). With the exception of meprobamate, the pharmacological actions of these drugs generally resemble those of the barbiturates: they all are general CNS depressants that can produce profound hypnosis with little or no analgesia; their effects on the stages of sleep are similar to those of the barbiturates; their therapeutic index is limited, and acute intoxication, which produces respiratory depression and hypotension, is managed similarly to barbiturate poisoning; their chronic use can result in tolerance and physical dependence; and the withdrawal syndrome following chronic use can be severe and life threatening. The properties of meprobamate bear some resemblance to those of the benzodiazepines, but the drug has a distinctly higher potential for abuse and has less selective antianxiety effects. The clinical use of these agents has decreased markedly, and deservedly so. Nevertheless, some of them remain useful in certain settings, particularly in hospitalized patients. The chemical structures and major pharmacological properties of paraldehyde, ethchlorvynol, chloral hydrate, and meprobamate are presented in Table 17-5. Further information on glutethimide, methyprylon, and ethinamate can be found in previous editions of this book.

Paraldehyde

Paraldehyde is a polymer of acetaldehyde, but it perhaps is best regarded as a polyether of cyclic structure. It has a strong aromatic odor and a disagreeable taste. Orally, it is irritating to the throat and stomach, and it is not administered parenterally because of its injurious effects on tissues. When given rectally as a retention enema, the drug is diluted with olive oil. Oral paraldehyde is rapidly absorbed and widely distributed; sleep usually ensues 10 to 15 minutes after hypnotic doses.
About 70% to 80% of a dose is metabolized in the liver, probably by depolymerization to acetaldehyde and subsequent oxidation to acetic acid, which is ultimately converted to carbon dioxide and water; most of the remainder is exhaled, producing a strong, characteristic smell on the breath. Commonly observed consequences of poisoning with the drug include acidosis, bleeding gastritis, and fatty changes in the liver and kidney with toxic hepatitis and nephrosis. The clinical uses of paraldehyde include the treatment of abstinence phenomena (especially delirium tremens in hospitalized patients) and other psychiatric states characterized by excitement. Paraldehyde also has been used for the treatment of convulsions (including status epilepticus) in children. Individuals who become addicted to paraldehyde may have become acquainted with the drug during treatment of their alcoholism and then, surprisingly in view of its disagreeable taste and odor, prefer it to alcohol.

Chloral Hydrate

Chloral hydrate is formed by adding one molecule of water to the carbonyl group of chloral (2,2,2-trichloroacetaldehyde). In addition to its hypnotic use, the drug has been employed in the past for the production of sedation in children undergoing diagnostic, dental, or other potentially uncomfortable procedures. Chloral hydrate is rapidly reduced to the active compound, trichloroethanol (CCl3CH2OH), largely by alcohol dehydrogenase in the liver; significant amounts of chloral hydrate are not found in the blood after its oral administration. Therefore, its pharmacological effects probably are caused by trichloroethanol. Indeed, the latter compound can exert barbiturate-like effects on GABAA receptor channels in vitro (Lovinger et al., 1993). Trichloroethanol is mainly conjugated with glucuronic

acid, and the product (urochloralic acid) is excreted mostly into the urine. Chloral hydrate is irritating to the skin and mucous membranes. These irritant actions give rise to an unpleasant taste, epigastric distress, nausea, and occasional vomiting, all of which are particularly likely to occur if the drug is insufficiently diluted or if it is taken on an empty stomach. Undesirable CNS effects include light-headedness, malaise, ataxia, and nightmares. "Hangover" also may occur, although it is less common than with most barbiturates and some benzodiazepines. Rarely, patients exhibit idiosyncratic reactions to chloral hydrate and may be disoriented and incoherent and show paranoid behavior. Acute poisoning by chloral hydrate may cause icterus. Individuals using chloral hydrate chronically may exhibit sudden, acute intoxication, which can be fatal; this situation results either from an overdose or from a failure of the detoxification mechanism owing to hepatic damage; parenchymatous renal injury also may occur. Sudden withdrawal from the habitual use of chloral hydrate may result in delirium and seizures, with a high frequency of death when untreated.

Ethchlorvynol

In addition to pharmacological actions that are very similar to those of barbiturates, ethchlorvynol has anticonvulsant and muscle relaxant properties. Ethchlorvynol is rapidly absorbed and widely distributed following oral administration. Two-compartment kinetics is manifest, with a distribution half-life of about 1 to 3 hours and an elimination half-life of 10 to 20 hours. As a result, the duration of action of the drug is relatively short, and early morning awakening may occur after its administration at bedtime. Approximately 90% of the drug eventually is destroyed in the liver. Ethchlorvynol is used as a short-term hypnotic for the management of insomnia. The most common side effects caused by ethchlorvynol are mint-like aftertaste, dizziness, nausea, vomiting, hypotension, and facial numbness.
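The two-compartment behavior described for ethchlorvynol can be sketched numerically with the standard biexponential model C(t) = A·e^(−αt) + B·e^(−βt). This is an illustrative sketch only: the intercepts A and B are hypothetical, and the half-lives are round numbers chosen from the ranges quoted above (distribution t½ about 2 hours, elimination t½ about 15 hours), not measured parameters for the drug.

```python
import math

def biexponential(t, a, alpha, b, beta):
    """Plasma concentration in a two-compartment (biexponential) model:
    C(t) = A*exp(-alpha*t) + B*exp(-beta*t)."""
    return a * math.exp(-alpha * t) + b * math.exp(-beta * t)

# Hypothetical illustrative values: distribution t1/2 ~2 h, elimination t1/2 ~15 h
alpha = math.log(2) / 2     # fast (distribution) rate constant, 1/h
beta = math.log(2) / 15     # slow (elimination) rate constant, 1/h
A, B = 8.0, 2.0             # hypothetical intercepts, arbitrary concentration units

for t in (0, 1, 4, 8, 24):
    print(f"t = {t:2d} h   C = {biexponential(t, A, alpha, B, beta):5.2f}")
```

The point of the sketch is that the early, steep decline is dominated by the fast distribution phase; this, rather than the slow 10- to 20-hour elimination phase, accounts for the drug's relatively short duration of action.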
Mild "hangover" also is relatively common. An occasional patient responds with profound hypnosis, muscular weakness, and syncope unrelated to marked hypotension. Idiosyncratic responses range from mild stimulation to marked excitement and hysteria. Hypersensitivity reactions include urticaria, rare but sometimes fatal thrombocytopenia, and occasionally cholestatic jaundice. Acute intoxication resembles that produced by barbiturates, except for more severe respiratory depression and a relative bradycardia. Ethchlorvynol may enhance the hepatic metabolism of other drugs such as oral anticoagulants, and it is contraindicated in patients with intermittent porphyria.

Meprobamate

Meprobamate is a bis-carbamate ester; it was introduced as an antianxiety agent in 1955, and this remains its only approved use in the United States. However, it also became popular as a sedative-hypnotic drug, and it is discussed here mainly because of the continuing practice of using it for such purposes. The question of whether the sedative and antianxiety actions of meprobamate differ remains unanswered, and clinical proof of the efficacy of meprobamate as a selective antianxiety agent in human beings is lacking. The pharmacological properties of meprobamate resemble those of the benzodiazepines in a number of ways. Like the benzodiazepines, meprobamate can release suppressed behaviors in experimental animals at doses that cause little impairment of locomotor activity, and, although it can cause widespread depression of the CNS, it cannot produce anesthesia. Unlike the benzodiazepines, ingestion of large doses of meprobamate alone may cause severe or even fatal respiratory depression, hypotension, shock, and heart failure. Meprobamate appears to have a mild analgesic effect in patients with musculoskeletal pain, and it enhances the analgesic effects of other drugs.

Meprobamate is well absorbed when administered orally. Nevertheless, an important aspect of intoxication with meprobamate is the formation of gastric bezoars consisting of undissolved meprobamate tablets; hence, treatment may require endoscopy, with mechanical removal of the bezoar. Most of the drug is metabolized in the liver, mainly to a side-chain hydroxy derivative and a glucuronide; the kinetics of elimination may be dependent on the dose. The half-life of meprobamate may be prolonged during its chronic administration, even though the drug can induce some hepatic microsomal enzymes. The major unwanted effects of the usual sedative doses of meprobamate are drowsiness and ataxia; larger doses produce considerable impairment of learning and motor coordination and prolongation of reaction time. Like the benzodiazepines, meprobamate enhances the CNS depression produced by other drugs. The abuse of meprobamate has continued despite a substantial decrease in the clinical use of the drug. Carisoprodol (SOMA), a skeletal muscle relaxant whose active metabolite is meprobamate, also has abuse potential and has become a popular "street drug" (Reeves et al., 1999). Meprobamate is preferred to the benzodiazepines by subjects with a history of drug abuse. After long-term medication, abrupt discontinuation evokes a withdrawal syndrome usually characterized by anxiety, insomnia, tremors, and, frequently, hallucinations; generalized seizures occur in about 10% of cases. The intensity of symptoms depends on the dosage ingested.

Others

Etomidate (AMIDATE) is used in the United States and other countries as an intravenous anesthetic, often in combination with fentanyl. It is advantageous because it lacks pulmonary and vascular depressant activity, although it has a negative inotropic effect on the heart. Its pharmacology and anesthetic uses are described in Chapter 14: General Anesthetics.
It also is used abroad as a sedative-hypnotic drug in intensive care units, during intermittent positive-pressure breathing, in epidural anesthesia, and in other situations. Because it is administered only intravenously, its use is limited to hospital settings. The myoclonus commonly seen after anesthetic doses is not seen after sedative-hypnotic doses. Clomethiazole has sedative, muscle relaxant, and anticonvulsant properties. It is used outside the United States for hypnosis in elderly and institutionalized patients, for preanesthetic sedation, and especially in the management of withdrawal from ethanol (see Symposium, 1986b). Given alone, its effects on respiration are slight, and the therapeutic index is high. However, deaths from adverse interactions with ethanol are relatively frequent.

Nonprescription Hypnotic Drugs

An advisory review panel of the FDA has recommended that, except for certain antihistamines (doxylamine, diphenhydramine, and pyrilamine), all putative active ingredients be eliminated from nonprescription sleep aids. Despite the prominent sedative side effects encountered during their use in the treatment of allergic diseases (see Chapter 25: Histamine, Bradykinin, and Their Antagonists), these antihistamines are not consistently effective in the treatment of sleep disorders. Contributing factors may include the rapid development of tolerance, paradoxical stimulation, and the inadequacy of the doses that currently are approved. Nevertheless, these doses sometimes produce prominent residual daytime CNS depression. For example, the elimination half-lives of doxylamine and diphenhydramine are about 9 hours.

Management of Insomnia

Insomnia is one of the most common complaints in general medical practice, and its treatment is predicated upon proper diagnosis. A variety of pharmacological agents are available for the treatment of insomnia. The "perfect" hypnotic would allow sleep to occur with normal sleep architecture, rather than produce a pharmacologically altered sleep pattern. It would not cause next-day effects, either of rebound anxiety or continued sedation. It would not interact with other medications. It could be used chronically without causing dependence or rebound insomnia on discontinuation. Regular moderate exercise meets these criteria but often is not effective by itself, and patients with significant cardiorespiratory disease may not be able to exercise. However, even small amounts of exercise often are effective in promoting sleep. Although the precise function of sleep is not known, adequate sleep improves the quality of daytime wakefulness, and hypnotics should be used judiciously to avoid its impairment. Controversy in the management of insomnia revolves around two issues: pharmacological versus nonpharmacological treatment and the use of short-acting versus long-acting hypnotics. Benzodiazepine hypnotics have been prescribed less commonly over the past decade. The British tend to take a conservative attitude toward prescribing benzodiazepines, for either anxiety or insomnia (Livingston, 1994). However, Walsh and Engelhardt (1992) think that this reduction in benzodiazepine prescribing may have more to do with media coverage of benzodiazepine side effects than with scientific data and that some patients may be undertreated with hypnotics. Perhaps related to this controversy, Yeo et al. (1994) found that physician self-rating of benzodiazepine prescribing generally greatly underestimated actual prescribing patterns.
The side effects of hypnotic medications must be weighed against the sequelae of chronic insomnia, which include a fourfold increase in serious accidents (Balter, 1992). Two aspects of the management of insomnia traditionally have been underappreciated: a search for specific medical causes and the use of nonpharmacological treatments. In addition to appropriate pharmacological treatment, the management of insomnia should correct identifiable causes, address inadequate sleep hygiene, eliminate performance anxiety related to falling asleep, provide entrainment of the biological clock so that maximum sleepiness occurs at the hour of attempted sleep, and suppress the use of alcohol and over-the-counter sleep medications (Nino-Murcia, 1992).

Categories of Insomnia

The National Institute of Mental Health Consensus Development Conference (1984) divided insomnia into three categories:

1. Transient insomnia lasts less than 3 days and usually is caused by a brief environmental or situational stressor. It may respond to attention to sleep hygiene rules. If hypnotics are prescribed, they should be used at the lowest dose and for only two to three nights. However, benzodiazepines given acutely prior to important life events, such as examinations, may result in impaired performance (James and Savage, 1984).

2. Short-term insomnia lasts from 3 days to 3 weeks and usually is caused by a personal stressor such as illness, grief, or job problems. Again, sleep hygiene education is the first step. Hypnotics may be used adjunctively for 7 to 10 nights. Hypnotics are best used intermittently during this time, with the patient skipping a dose after one to two nights of good sleep.

3. Long-term insomnia has lasted for more than 3 weeks; no specific stressor may be identifiable. A more complete medical evaluation is necessary in these patients, but most do not need an all-night sleep study.

Insomnia Accompanying Major Psychiatric Illnesses

The insomnia caused by major psychiatric illnesses often responds to specific pharmacological treatment for that illness. For example, in major depressive episodes with insomnia, even such medications as the selective serotonin-reuptake inhibitors, which may cause insomnia as a side effect, usually will result in improved sleep as they treat the depressive syndrome. In patients whose depression is responding to the serotonin-reuptake inhibitor but who have persistent insomnia as a side effect of the medication, judicious use of evening trazodone may improve sleep (Nierenberg et al., 1994) as well as augment the antidepressant effect of the reuptake inhibitor. However, the patient should be monitored for priapism, orthostatic hypotension, and arrhythmias. Adequate control of anxiety in patients with anxiety disorders often produces adequate resolution of the accompanying insomnia. Sedative use in the anxiety disorders is decreasing because of a growing appreciation of the effectiveness of other agents, such as β-adrenergic receptor antagonists (see Chapter 10: Catecholamines, Sympathomimetic Drugs, and Adrenergic Receptor Antagonists) for performance anxiety and serotonin-reuptake inhibitors for obsessive-compulsive disorder and perhaps generalized anxiety disorder. The profound insomnia of patients with acute psychosis due to schizophrenia or mania usually responds to dopamine-receptor antagonists. Benzodiazepines often are used adjunctively in this situation to reduce agitation; their use also will result in improved sleep.

Insomnia Accompanying Other Medical Illnesses

For long-term insomnia due to other medical illnesses, adequate treatment of the underlying disorder, such as congestive heart failure, asthma, or chronic obstructive pulmonary disease, may resolve the insomnia.
Adequate pain management in conditions of chronic pain, including terminal cancer pain, will treat both the pain and the insomnia and may make hypnotics unnecessary. Many patients simply manage their sleep poorly. Adequate attention to sleep hygiene, including reduced caffeine intake, avoidance of alcohol, adequate exercise, and regular sleep and wake times, often will reduce the insomnia.

Conditioned (Learned) Insomnia

In those who have no major psychiatric or other medical illness and in whom attention to sleep hygiene is ineffective, attention should be directed to conditioned (learned) insomnia. These patients have associated the bedroom with activities consistent with wakefulness rather than sleep. In such patients, the bed should be used only for sex and sleep. All other activities associated with waking, even such quiescent activities as reading and watching television, should be done outside the bedroom.

Sleep State Misperception

Some patients complain of poor sleep but have been shown to have no objective polysomnographic evidence of insomnia. They are difficult to treat. Some patients are simply constitutional short sleepers who do not need the typical seven to eight hours of sleep per day to function. If daytime wakefulness, mood, and functioning are unimpaired,

no treatment is necessary. Some patients with sleep apnea may ask for sleeping pills because they do not feel rested in the morning. Hypnotic agents usually are contraindicated in such patients. These individuals benefit from all-night sleep studies for proper evaluation and recommendations for appropriate treatment.

Long-Term Insomnia

Nonpharmacological treatments are important for all patients with long-term insomnia. These include education about sleep hygiene, adequate exercise (where possible), relaxation training, and behavioral modification approaches, such as sleep restriction and stimulus control therapy. In sleep restriction therapy, the patient keeps a diary of the amount of time spent in bed and then chooses a time in bed of 30 to 60 minutes less than this time. This induces a mild sleep debt, which aids sleep onset. In stimulus control therapy, the patient is instructed to go to bed only when sleepy, to use the bedroom only for sleep and sex, to get up and leave the bedroom if sleep does not occur within 15 to 20 minutes, to return to bed again only when sleepy, to arise at the same time each morning regardless of sleep quality the preceding night, and to avoid daytime naps. Nonpharmacological treatments for insomnia have been found to be particularly effective in reducing sleep-onset latency and time awake after sleep onset (Morin et al., 1994). Side effects of hypnotic agents may limit their usefulness for insomnia management. The use of hypnotics for long-term insomnia is problematic for many reasons. Long-term hypnotic use leads to a decrease in effectiveness and may produce rebound insomnia upon discontinuance. Almost all hypnotics change sleep architecture. The barbiturates reduce REM sleep; the benzodiazepines reduce slow-wave non-REM sleep and, to a lesser extent, REM sleep. While the significance of these findings is still unclear, there is an emerging consensus that slow-wave sleep is particularly important for physical restorative processes.
REM sleep may aid in the consolidation of learning. The blockade of slow-wave sleep by benzodiazepines may help to account for their diminishing effectiveness over the long term, and it also may explain their effectiveness in blocking sleep terrors, a disorder of arousal from slow-wave sleep. Benzodiazepines produce cognitive changes. Long-acting agents can cause next-day confusion, with a concomitant increase in falls, while shorter-acting agents can produce rebound next-day anxiety. Paradoxically, the acute amnestic effects of benzodiazepines may be responsible for the patient's subsequent report of restful sleep. Triazolam has been postulated to induce cognitive changes that blur the subjective distinction between waking and sleeping (Mendelson, 1993). Anterograde amnesia may be more common with triazolam. While the performance-disruptive effects of alcohol and diphenhydramine are reduced after napping, those of triazolam are not (Roehrs et al., 1993). Benzodiazepines may worsen sleep apnea. Some hypersomnia patients do not feel refreshed after a night's sleep and so may ask for sleeping pills to improve the quality of their sleep. The consensus is that hypnotics should not be given to patients with sleep apnea, especially of the obstructive type, because these agents decrease upper airway muscle tone while also decreasing the arousal response to hypoxia (Robinson and Zwillich, 1989).

Insomnia in Older Patients

The elderly, like the very young, tend to sleep in a polyphasic pattern (multiple sleep episodes per day) rather than the monophasic pattern characteristic of younger adults. They may have single or multiple daytime naps in addition to nighttime sleep. This pattern makes assessment of adequate

sleep time difficult. Anyone who naps regularly will have shortened nighttime sleep without evidence of impaired daytime wakefulness, regardless of age. This pattern is exemplified in "siesta" cultures and probably is adaptive. Changes in the pharmacokinetic profiles of hypnotic agents occur in the elderly because of reduced body water, reduced renal function, and increased body fat, leading to a longer half-life for benzodiazepines. A dose that produces pleasant sleep and adequate daytime wakefulness during week 1 of administration may produce daytime confusion and amnesia by week 3 as the level continues to rise, particularly with long-acting hypnotics. For example, the benzodiazepine diazepam is highly lipid soluble and is excreted by the kidney. Because of the increase in body fat and the decrease in renal excretion that typically occur from age 20 to 80, the half-life of the drug may increase fourfold over this span. Elderly people who are living full lives with relatively unimpaired daytime wakefulness may complain of insomnia because they are not sleeping as long as they did when they were younger. Injudicious use of hypnotics in these individuals can produce daytime cognitive impairment and so impair overall quality of life. Once an older patient has been taking benzodiazepines for an extended period, whether for daytime anxiety or nighttime sedation, terminating administration of the drug can be a long, involved process. It may be warranted to leave the patient on the medication, with adequate attention to daytime side effects.

Management of Patients Following Long-Term Treatment with Hypnotic Agents

Patients who have been taking hypnotics for many months or even years represent a special problem group (Fleming, 1993). If a benzodiazepine has been used regularly for more than 2 weeks, it should be tapered rather than discontinued abruptly.
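The age-related lengthening of benzodiazepine half-life described above follows directly from the standard pharmacokinetic relation t½ = ln(2) · Vd / CL. The sketch below uses hypothetical round numbers, not measured diazepam values, simply to show the arithmetic: doubling the volume of distribution while halving clearance quadruples the half-life.

```python
import math

def half_life_hours(vd_liters, cl_liters_per_hour):
    """Elimination half-life from volume of distribution and clearance:
    t1/2 = ln(2) * Vd / CL."""
    return math.log(2) * vd_liters / cl_liters_per_hour

# Hypothetical illustrative values for a lipophilic hypnotic:
t_young = half_life_hours(vd_liters=70, cl_liters_per_hour=1.6)    # young adult
t_old = half_life_hours(vd_liters=140, cl_liters_per_hour=0.8)     # elderly: Vd doubled, CL halved

print(f"young adult t1/2 = {t_young:.0f} h, elderly t1/2 = {t_old:.0f} h "
      f"({t_old / t_young:.0f}-fold increase)")
```

The same relation explains drug accumulation on repeated dosing: with a half-life measured in days rather than hours, steady state is not approached until several weeks of nightly administration, which is why confusion may first appear in week 3.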
In some patients taking hypnotics with a short half-life, it is easier to switch first to a hypnotic with a long half-life and then taper. In a study of nine patients in whom the nonbenzodiazepine agent zopiclone was abruptly substituted for a benzodiazepine agent for 1 month and then itself abruptly terminated, improved sleep was reported during the zopiclone treatment, and withdrawal effects were absent on discontinuation of zopiclone (Shapiro et al., 1993). The onset of withdrawal symptoms from medications with a long half-life may be delayed. Consequently, the patient should be warned about the symptoms associated with withdrawal effects.

Prescribing Guidelines for the Management of Insomnia

Hypnotics that act at benzodiazepine receptors, including the benzodiazepine hypnotics as well as the newer agents zolpidem, zopiclone, and zaleplon, are preferred to barbiturates because they have a greater therapeutic index, are less toxic in overdose, have smaller effects on sleep architecture, and have less abuse potential. Compounds with a shorter half-life are favored in patients with sleep-onset insomnia but without significant daytime anxiety who need to function at full effectiveness all day. These compounds also are appropriate for the elderly, because of a decreased risk of falls and respiratory depression. However, the patient and physician should be aware that early morning awakening, rebound daytime anxiety, and amnestic episodes also may occur. These undesirable side effects are more common at higher doses of the benzodiazepines. Benzodiazepines with a longer half-life are favored for patients who have significant daytime anxiety and who may be able to tolerate next-day sedation but would be impaired further by

rebound daytime anxiety. These benzodiazepines also are appropriate for patients receiving treatment for major depressive episodes, because the short-acting agents can worsen early morning awakening. However, longer-acting benzodiazepines can be associated with next-day cognitive impairment or delayed daytime cognitive impairment (after 2 to 4 weeks of treatment) as a result of drug accumulation with repeated administration. Older agents such as barbiturates, glutethimide, and meprobamate should be avoided for the management of insomnia. They have high abuse potential and are dangerous in overdose.

Chapter 18. Ethanol


Overview

Ethanol is one of a wide variety of structurally dissimilar agents that depress the functioning of the central nervous system (CNS). Ethanol differs from most other CNS depressants in that it is widely available to adults, and its use is legal and accepted in many societies. Associated with this widespread availability of ethanol are the enormous personal and societal costs of its abuse, with millions of individuals becoming alcohol abusers, or alcoholics. This chapter describes the pharmacological properties of ethanol in terms of its effects on a variety of organ systems, including the gastrointestinal, cardiovascular, and central nervous systems, and how ethanol affects disease processes. Effects of ethanol on the developing embryo and fetus are reviewed, as well as the long-term consequences of prenatal exposure to ethanol. Ethanol disturbs the fine balance that exists between excitatory and inhibitory influences in the brain, producing the disinhibition, ataxia, and sedation that follow its consumption. Tolerance to ethanol develops after chronic use, and physical dependence is demonstrated upon alcohol withdrawal. Existing and emerging pharmacotherapies for alcohol dependence are discussed, as well as recent research into the cellular and molecular mechanisms of ethanol actions in vivo, which should aid in the development of rational therapies for alcohol abuse and alcoholism.

History and Overview

Alcoholic beverages are so strongly associated with human society that fermentation is said to have developed in parallel with civilization. Until recently, alcoholic beverages contained relatively low concentrations of ethanol (the terms ethanol and alcohol are used interchangeably in this chapter), and there is speculation that human alcohol use is linked evolutionarily to a preference for fermenting fruit, where the presence of ethanol signals that the fruit is ripe but not yet rotten (Dudley, 2000). The Arabs developed distillation about A.D.
800, and the word alcohol is derived from the Arabic for "something subtle." Alchemists of the Middle Ages were captivated by the invisible "spirit" that was distilled from wine and thought it to be a remedy for practically all diseases. The term whiskey is derived from usquebaugh, Gaelic for "water of life," and alcohol became the major ingredient of widely marketed "tonics" and "elixirs." Although alcohol abuse and alcoholism are major health problems in many countries, the medical and social impacts of alcohol abuse have not always been appreciated. Alcohol abuse costs the United States economy about $170 billion each year, and alcohol is responsible for more than 100,000 deaths annually. At least 14 million Americans meet the criteria for alcohol abuse or alcoholism, but medical diagnosis and treatment often are delayed until the disease is advanced and
complicated by multiple social and health problems, making treatment difficult. Biological and genetic studies clearly place alcoholism among other diseases with both genetic and environmental influences, but persistent stigmas and attribution to moral failure have impeded recognition and treatment of alcohol problems. A major challenge for physicians and researchers is to devise diagnostic and therapeutic approaches aimed at this major health problem. Compared with other drugs, surprisingly large amounts of alcohol are required for physiological effects, so that it is consumed more like a food than a drug. The alcohol content of beverages ranges from 4% to 6% (volume/volume) for beer, 10% to 15% for wine, and 40% or higher for distilled spirits. The "proof" of an alcohol-containing beverage is twice its percent alcohol (e.g., 40% alcohol is 80 proof). Remarkably, and contrary to public impressions, the serving size for alcoholic beverages is adjusted so that about 14 grams of alcohol is contained in a glass of beer or wine or a shot of spirits. Thus, alcohol is consumed in gram quantities, whereas most other drugs are taken in milligram or microgram doses. Blood alcohol levels (BALs) in human beings can be estimated readily by the measurement of alcohol levels in expired air; the partition coefficient for ethanol between blood and alveolar air is approximately 2000:1. Because of the causal relationship between excessive alcohol consumption and vehicular accidents, there has been a near-universal adoption of laws attempting to limit the operation of vehicles while under the influence of alcohol. Legally allowed BALs typically are set at or below 100 mg % (100 mg of ethanol per deciliter of blood; 0.1% w/v), which is equivalent to a concentration of 22 mM ethanol in blood.
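These unit relationships are easy to verify with a short script. The only value assumed beyond the text is the molecular weight of ethanol, approximately 46.07 g/mol:

```python
ETHANOL_MW = 46.07  # g/mol, molecular weight of ethanol

def percent_to_proof(percent_abv):
    """Proof is defined as twice the percent alcohol by volume."""
    return 2 * percent_abv

def mg_percent_to_mM(mg_percent):
    """Convert a BAL in mg % (mg ethanol per dL of blood) to millimolar."""
    mg_per_liter = mg_percent * 10    # 1 dL = 0.1 L
    return mg_per_liter / ETHANOL_MW  # mg / (g/mol) = mmol, per liter

print(percent_to_proof(40))             # 40% alcohol is 80 proof
print(round(mg_percent_to_mM(100), 1))  # 100 mg % is about 21.7 mM, i.e., ~22 mM
```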
A 12-ounce bottle of beer, a 5-ounce glass of wine, and a 1.5-ounce shot of 40% liquor all contain roughly 14 grams of ethanol, and the consumption of one of those beverages by a 70-kg person would produce a BAL of approximately 30 mg %. This value is only approximate, however, because the blood alcohol level is determined by a number of factors, including the rate of drinking, gender, body weight and water percentage, and the rates of metabolism and stomach emptying (see section on "Acute Ethanol Intoxication"). Pharmacological Properties Absorption, Distribution, and Metabolism After oral administration, ethanol is rapidly absorbed into the bloodstream from the stomach and small intestine and distributes into total body water. Because absorption occurs more rapidly from the small intestine than from the stomach, delays in gastric emptying (due, for example, to the presence of food) slow ethanol absorption. After it enters the bloodstream, alcohol first travels to the liver before it quickly distributes into all body fluids. After oral consumption of alcohol, first-pass metabolism by gastric and liver alcohol dehydrogenase (ADH) enzymes leads to lower blood alcohol levels than would be obtained if the same dose were administered intravenously. Less gastric metabolism of ethanol occurs in women than in men, which may explain in part the greater susceptibility of women to ethanol (Lieber, 2000). Aspirin increases ethanol bioavailability by inhibiting gastric ADH. Although a small amount of ethanol is excreted unchanged in urine, sweat, and breath, most of it (90% to 98%) is metabolized to acetaldehyde and then to acetate, primarily in the liver. ADH, catalase, and a microsomal cytochrome P450 ethanol-oxidizing system all catalyze the oxidation of ethanol to acetaldehyde, with ADH playing the predominant role in the liver. This first step in alcohol metabolism also is the rate-limiting step in determining how quickly ethanol is cleared from the body.
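The ~30 mg % figure can be reproduced with a simplified Widmark-style calculation. The distribution factor r below (0.68, a value commonly quoted for men; women average lower) is an assumption not stated in the text, and the estimate ignores absorption kinetics, first-pass metabolism, and ongoing elimination:

```python
def estimate_bal_mg_percent(dose_g, weight_kg, r=0.68):
    """Rough peak BAL (mg %) assuming the full ethanol dose distributes
    into body water; r is the assumed Widmark distribution factor."""
    grams_per_liter = dose_g / (weight_kg * r)  # g ethanol per L of body water
    return grams_per_liter * 100                # 1 g/L = 100 mg/dL = 100 mg %

# One standard drink (about 14 g ethanol) in a 70-kg person:
print(round(estimate_bal_mg_percent(14, 70)))   # about 29 mg %
```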
The oxidation of ethanol differs from that of most substances, in that it is relatively independent of concentration in blood and is constant with time (zero-order kinetics). On average, about 10 ml of ethanol are oxidized by a 70-kg person each hour (or about 120 mg/kg per hour). Acetaldehyde is rapidly metabolized to acetate by cytosolic and mitochondrial aldehyde dehydrogenase in the liver.
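Because elimination is zero order, BAL falls linearly with time rather than exponentially. A rough sketch of the implied decline, using the text's ~120 mg/kg per hour oxidation figure together with an assumed 0.68 body-water distribution factor:

```python
def bal_decline_mg_percent_per_hr(weight_kg, oxidation_mg_per_kg_hr=120, r=0.68):
    """BAL decline per hour under zero-order kinetics. The 120 mg/kg/hr
    oxidation rate is from the text; the 0.68 body-water fraction is an
    assumed Widmark factor, not stated in the text."""
    grams_per_hour = oxidation_mg_per_kg_hr * weight_kg / 1000.0  # g ethanol/hr
    body_water_liters = weight_kg * r
    return grams_per_hour / body_water_liters * 100  # g/L/hr -> mg %/hr

rate = bal_decline_mg_percent_per_hr(70)
print(round(rate, 1))        # roughly 17.6 mg % per hour
print(round(100 / rate, 1))  # a BAL of 100 mg % takes roughly 5.7 hours to clear
```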

Although the cytochrome P450 system is not usually a major factor in the metabolism of ethanol, it can be an important site of interactions of ethanol with other drugs. The enzyme system is induced by chronic consumption of ethanol, leading to increased clearance of drugs that are substrates for it. There can be decreased clearance of the same drugs, however, after acute consumption of ethanol, as ethanol competes with them for oxidation by the enzyme system (e.g., phenytoin, warfarin). Central Nervous System The public often views alcoholic drinks as stimulating, but ethanol is primarily a central nervous system (CNS) depressant. Ingestion of moderate amounts of ethanol, like that of other depressants such as barbiturates and benzodiazepines, can have antianxiety actions and produce behavioral disinhibition over a wide range of doses. Individual signs of intoxication vary from expansive and vivacious affect to uncontrolled mood swings and emotional outbursts that may have violent components. With more severe intoxication, a general impairment of CNS function occurs, and a condition of general anesthesia ultimately prevails. However, there is little margin between the anesthetic actions and lethal effects (usually due to respiratory depression). About 10% of alcohol drinkers progress to levels of consumption that are physically and socially detrimental. Chronic abuse is accompanied by tolerance, dependence, and craving for the drug (see below for a discussion of neuronal mechanisms; see also Chapter 24: Drug Addiction and Drug Abuse). Alcoholism is characterized by compulsive use despite clearly deleterious social and medical consequences. Alcoholism is a progressive illness, and brain damage from chronic alcohol abuse contributes to the deficits in cognitive functioning and judgment seen in alcoholics. Alcoholism is a leading cause of dementia in the United States (Oslin et al., 1998).
Chronic alcohol abuse results in shrinkage of the brain due to loss of both white and gray matter (Kril and Halliday, 1999). The frontal lobes are particularly sensitive to damage by alcohol, and the extent of damage is determined by the amount and duration of alcohol consumption, with older alcoholics being more vulnerable than younger ones (Pfefferbaum et al., 1998). It is important to note that ethanol itself is neurotoxic and, although malnutrition or vitamin deficiencies probably play roles in complications of alcoholism such as Wernicke's encephalopathy and Korsakoff's psychosis, in Western countries most of the brain damage in these disorders is due to alcohol per se. In addition to loss of brain tissue, alcohol abuse also reduces brain metabolism (as determined by positron emission tomography), and this hypometabolic state rebounds to a level of increased metabolism during detoxification. The magnitude of decrease in metabolic state is determined by the number of years of alcohol use and the age of the patients (Volkow et al., 1994; see "Mechanisms of CNS Effects of Ethanol," below). Cardiovascular System Serum Lipoproteins and Cardiovascular Effects In most countries, the risk of mortality due to coronary heart disease (CHD) is correlated with a high dietary intake of saturated fat and elevated serum cholesterol levels. France is an exception to this rule, with relatively low mortality from CHD despite the consumption of high quantities of saturated fats by the French (the "French paradox"). Epidemiological studies suggest that widespread wine consumption by the French (20 to 30 g of ethanol per day) is one of the factors conferring a cardioprotective effect, with 1 to 3 drinks per day resulting in a 10% to 40% decreased risk of coronary heart disease, compared to abstainers.
In contrast, daily consumption of greater amounts of alcohol leads to an increased incidence of noncoronary causes of cardiovascular failure, such as arrhythmias, cardiomyopathy, and hemorrhagic stroke, offsetting the beneficial effects of alcohol on coronary arteries; i.e., alcohol has a "J-shaped" dose-mortality curve. Reduced risks for
CHD are seen at intakes as low as one-half drink per day (Maclure, 1993). Young women and others at low risk for heart disease derive little benefit from light to moderate alcohol intake, while those of both sexes who are at high risk and who may have had a myocardial infarction clearly benefit. Data based on a number of prospective, cohort, cross-cultural, and case-control studies in diverse populations consistently reveal lower rates of angina pectoris, myocardial infarction, and peripheral artery disease in those consuming light (1 to 20 g/day) to moderate (21 to 40 g/day) amounts of alcohol. One possible mechanism by which alcohol could reduce the risk of CHD is through its effects on blood lipids. Changes in plasma lipoprotein levels, particularly increases in high-density lipoprotein (HDL; see Chapter 36: Drug Therapy for Hypercholesterolemia and Dyslipidemia), have been associated with the protective effects of ethanol. HDL binds cholesterol and returns it to the liver for elimination or reprocessing, decreasing tissue cholesterol levels. Ethanol-induced increases in HDL-cholesterol could thus be expected to antagonize cholesterol buildup on arterial walls, lessening the risk of infarction. Approximately half of the risk reduction associated with ethanol consumption is explained by changes in total HDL levels (Langer et al., 1992). HDL is found as two subfractions, named HDL2 and HDL3. Increased levels of HDL2 (and possibly also HDL3) are associated with reduced risk of myocardial infarction. Levels of both subfractions are increased following alcohol consumption (Gaziano et al., 1993) and decrease when alcohol consumption ceases. Apolipoproteins A-I and A-II are constituents of HDL, with some HDL particles containing only the former, while others are composed of both. Increased levels of both apolipoproteins A-I and A-II are seen in individuals regularly consuming alcohol. 
In contrast, there are reports of decreased serum apolipoprotein(a) levels following periods of alcohol consumption. Elevated apolipoprotein(a) levels have been associated with an increased risk for the development of atherosclerosis. Although the cardioprotective effects of ethanol initially were noted in wine drinkers, all forms of alcoholic beverages confer cardioprotection. A variety of alcoholic beverages increase HDL levels while decreasing the risk of myocardial infarction. The flavonoids found in red wine (and purple grape juice) may play an extra role in protecting LDL from oxidative damage. Oxidized LDL has been implicated in several steps of atherogenesis. The antiatherogenic effects of alcohol could be mediated by changes in LDL oxidation and elevated estrogen levels (Hillbom et al., 1998). Flavonoids also induce endothelium-dependent vasodilation (Stein et al., 1999). Another way in which alcohol consumption conceivably could play a cardioprotective role is by altering factors involved in blood clotting. The formation of clots is an important step in the genesis of myocardial infarctions, and a number of factors maintain a balance between bleeding and clot dissolution. Alcohol consumption elevates the levels of tissue plasminogen activator, a clot-dissolving enzyme (Ridker et al., 1994; see Chapter 55: Anticoagulant, Thrombolytic, and Antiplatelet Drugs), decreasing the likelihood of clot formation. Decreased fibrinogen concentrations seen following ethanol consumption also could have cardioprotective effects (Rimm et al., 1999), and epidemiological studies have linked the moderate consumption of ethanol to an inhibition of platelet activation (Rubin, 1999). A question worth addressing is whether or not abstainers from alcohol should be advised to begin the consumption of moderate amounts of ethanol. The answer is no. 
It is important to note that there have been no randomized clinical trials to test the efficacy of daily alcohol use in reducing rates of coronary heart disease and mortality, and it is not appropriate for physicians to advocate the ingestion of alcohol solely to prevent heart disease. Many abstainers avoid alcohol because of a family history of alcoholism or for other health reasons, and it is not prudent to suggest that they begin drinking. Other lifestyle changes or medical treatments should be encouraged if patients are at
risk for the development of CHD. Hypertension Heavy alcohol use can raise diastolic and systolic blood pressure (Klatsky, 1996). Studies indicate a positive, nonlinear association between alcohol use and hypertension, unrelated to age, education, smoking status, or the use of birth control medication. Consumption above 30 grams of alcohol per day (more than two standard drinks) is associated with a 1.5- to 2.3-mm Hg rise in diastolic and systolic blood pressure. A time effect also has been demonstrated, with diastolic and systolic blood pressure elevation being greatest for persons who consumed alcohol within 24 hours of examination (Moreira et al., 1998). Women may be at greater risk than men (Seppa et al., 1996). A number of hypotheses have been proposed to explain the cause of alcohol-induced hypertension. One is that some hypertensive alcoholic patients abstain before a physician visit (Iwase et al., 1995). As blood alcohol levels fall, acute withdrawal causes an elevation in blood pressure that is reflected in elevated blood pressure readings in the physician's office. Another hypothesis holds that there is a direct pressor effect of alcohol caused by an unknown mechanism. Studies that have examined levels of renin, angiotensin, norepinephrine, antidiuretic hormone, cortisol, and other pressor mediators have been inconclusive. Newer hypotheses include increased intracellular Ca2+ levels with a subsequent increase in vascular reactivity, stimulation of the endothelium to release endothelin, and inhibition of endothelium-dependent nitric oxide production (Grogan and Kochar, 1994). The prevalence of hypertension attributable to excess alcohol consumption is not known, but studies suggest a range of 5% to 11%. The prevalence probably is higher for men than for women because of higher alcohol consumption by men. 
A reduction or cessation of alcohol use in heavy drinkers may reduce the need for antihypertensive medication or reduce the blood pressure to the normal range. A safe amount of alcohol consumption for hypertensive patients who are light drinkers (one to two drinks per occasion, and less than 14 per week) has not been determined. Factors to consider are a personal history of ischemic heart disease, a history of binge drinking, or a family history of alcoholism or of cerebrovascular accident. Hypertensive patients with any of these risk factors should abstain from alcohol use. Cardiac Arrhythmias Alcohol has a number of pharmacological effects on cardiac conduction, including prolongation of the QT interval, prolongation of ventricular repolarization, and sympathetic stimulation (Rossinen et al., 1999; Kupari and Koskinen, 1998). Atrial arrhythmias associated with chronic alcohol use include supraventricular tachycardia, atrial fibrillation, and atrial flutter. Some 15% to 20% of idiopathic cases of atrial fibrillation may be induced by chronic ethanol use (Braunwald, 1997). Ventricular tachycardia may be responsible for the increased risk of unexplained sudden death that has been observed in persons who are alcohol-dependent (Kupari and Koskinen, 1998). During continued alcohol use, these arrhythmias may be more resistant to cardioversion or to treatment with digitalis or Ca2+ channel-blocking agents (see Chapter 35: Antiarrhythmic Drugs). Patients with recurrent or refractory atrial arrhythmias should be questioned carefully about alcohol use. Cardiomyopathy Ethanol is known to have dose-related toxic effects on both skeletal and cardiac muscle (Preedy et al., 1994). Numerous studies have shown that alcohol can depress cardiac contractility and lead to cardiomyopathy (Thomas et al., 1994). Echocardiography demonstrates global hypokinesis. Fatty
acid ethyl esters (formed from the enzymatic reaction of ethanol with free fatty acids) appear to play a role in the development of this disorder (Beckemeier and Bora, 1998). Approximately half of all patients with idiopathic cardiomyopathy are alcohol-dependent. Although the clinical signs and symptoms of idiopathic and alcohol-induced cardiomyopathy are similar, alcohol-induced cardiomyopathy has a better prognosis if patients are able to stop drinking. Women are at greater risk than are men (Urbano-Marquez et al., 1995). As 40% to 50% of persons with alcohol-induced cardiomyopathy who continue to drink die within 3 to 5 years, abstinence remains the primary treatment. Some patients respond to diuretics, angiotensin-converting enzyme inhibitors, and vasodilators. Stroke Clinical studies indicate a higher than normal incidence of hemorrhagic and ischemic stroke in persons who drink more than 40 to 60 grams of alcohol per day (Hansagi et al., 1995). Many cases of stroke follow prolonged binge drinking, especially when stroke occurs in younger patients. Proposed etiological factors include alcohol-induced (1) cardiac arrhythmias and associated thrombus formation; (2) high blood pressure and subsequent cerebral artery degeneration; (3) acute increases in systolic blood pressure and alteration in cerebral artery tone; and (4) head trauma. The effects on hemostasis, fibrinolysis, and blood clotting are variable and could prevent or precipitate acute stroke (Numminen et al., 1996). The effects of alcohol on the formation of intracranial aneurysms are controversial, but the statistical association disappears when one controls for tobacco use and gender (Qureshi et al., 1998). Skeletal Muscle Alcohol has a number of effects on skeletal muscle (Panzak et al., 1998).
Chronic, heavy, daily alcohol consumption is associated with decreased muscle strength even when studies are controlled for other factors such as age, nicotine use, or chronic illness (Clarkson and Reichsman, 1990). Heavy doses of alcohol also cause irreversible damage to muscle, reflected by a marked increase in the activity of creatine phosphokinase in plasma. Muscle biopsies from heavy drinkers also reveal decreased levels of glycogen stores and pyruvate kinase activity (Vernet et al., 1995). Approximately 50% of chronic heavy drinkers have evidence of type II fiber atrophy. These changes correlate with reductions in muscle protein synthesis and serum carnosinase activities (Wassif et al., 1993). Most patients with chronic alcoholism show electromyographical changes, and many show evidence of a skeletal myopathy similar to alcoholic cardiomyopathy (Fernandez-Sola et al., 1994). Body Temperature Ingestion of ethanol causes a feeling of warmth, because alcohol enhances cutaneous and gastric blood flow. Increased sweating also may occur. Heat, therefore, is lost more rapidly and the internal temperature falls. After consumption of large amounts of ethanol, the central temperature-regulating mechanism itself becomes depressed, and the fall in body temperature may become pronounced. The action of alcohol in lowering body temperature is greater and more dangerous when the ambient temperature is low. Studies of hypothermia deaths suggest that alcohol is a major risk factor in these events (Kortelainen, 1991). Patients with ischemic limbs secondary to peripheral vascular disease are particularly susceptible to cold damage (Proano and Perbeck, 1994). Diuresis Alcohol inhibits the release of vasopressin (antidiuretic hormone; see Chapter 30: Vasopressin and
Other Agents Affecting the Renal Conservation of Water) from the posterior pituitary gland, resulting in enhanced diuresis (Leppaluoto et al., 1992). This may be complemented by ethanol-induced increases in plasma levels of atrial natriuretic peptide (Colantonio et al., 1991). Alcoholics have less urine output than do control subjects in response to a challenge dose of ethanol, suggesting that tolerance develops to the diuretic effects of ethanol (Collins et al., 1992). Alcoholics withdrawing from alcohol exhibit increased vasopressin release and a consequent retention of water, as well as dilutional hyponatremia. Gastrointestinal System Esophagus Alcohol frequently is either the primary etiologic factor or one of multiple causal factors associated with esophageal dysfunction. Ethanol also is associated with the development of esophageal reflux, Barrett's esophagus, traumatic rupture of the esophagus, Mallory-Weiss tears, and esophageal cancer. When compared to nonalcoholic nonsmokers, alcohol-dependent patients who smoke have a tenfold increased risk of developing cancer of the esophagus. There is little change in esophageal function at low blood alcohol concentrations, but at higher blood alcohol concentrations, a decrease in peristalsis and decreased lower esophageal sphincter pressure occur. Patients with chronic reflux esophagitis may respond to proton pump inhibitors (see Chapter 37: Agents Used for Control of Gastric Acidity and Treatment of Peptic Ulcers and Gastroesophageal Reflux Disease). Stomach Heavy alcohol use can disrupt the gastric mucosal barrier and cause acute and chronic gastritis. Ethanol appears to stimulate gastric secretions by exciting sensory nerves in the buccal and gastric mucosa and promoting the release of gastrin and histamine. Beverages containing more than 40% alcohol also have a direct toxic effect on gastric mucosa.
While these effects are seen most often in chronic heavy drinkers, they can occur after moderate and/or short-term alcohol use. Clinical symptoms include acute epigastric pain that is relieved with antacids or histamine H2-receptor blockers (see Chapter 37: Agents Used for Control of Gastric Acidity and Treatment of Peptic Ulcers and Gastroesophageal Reflux Disease). The diagnosis may not be clear, because many patients have normal endoscopic examinations and upper gastrointestinal radiographs. Alcohol is not thought to play a role in the pathogenesis of peptic ulcer disease. Unlike acute and chronic gastritis, peptic ulcer disease is not more common in alcoholics. Nevertheless, alcohol exacerbates the clinical course and severity of ulcer symptoms. It appears to act synergistically with Helicobacter pylori to delay healing (Lieber, 1997a). Acute bleeding from the gastric mucosa, while uncommon, can be a life-threatening emergency. Upper gastrointestinal bleeding more commonly is associated with esophageal varices, traumatic rupture of the esophagus, and clotting abnormalities. Intestines Many alcoholics have chronic diarrhea as a result of malabsorption in the small intestine (Addolorato et al., 1997). The major symptom is frequent loose stools. The rectal fissures and pruritus ani that are frequently associated with heavy drinking probably are related to chronic diarrhea. The diarrhea is caused by structural and functional changes in the small intestine (Papa et al., 1998); the intestinal mucosa has flattened villi, and digestive enzyme levels are often decreased. These changes frequently are reversible after a period of abstinence. Treatment is based on replacing essential vitamins and electrolytes, slowing transit time with an agent such as loperamide (see Chapter 39: Agents Used for Diarrhea, Constipation, and Inflammatory Bowel Disease; Agents
Used for Biliary and Pancreatic Disease), and abstaining from all alcoholic beverages. Patients with severe magnesium deficiencies (serum magnesium less than 1.0 mEq/liter) or symptomatic patients (a positive Chvostek's sign or asterixis) should have replacement with 1 gram of magnesium sulfate intravenously or intramuscularly every four hours until the serum magnesium is greater than 1.0 mEq/liter (Sikkink and Fleming, 1992). Pancreas Heavy alcohol use is the most common cause of both acute and chronic pancreatitis in the United States. While pancreatitis has been known to occur after a single episode of heavy alcohol use, prolonged heavy drinking is common in most cases. Acute alcoholic pancreatitis is characterized by the abrupt onset of abdominal pain, nausea, vomiting, and increased levels of serum or urine pancreatic enzymes. Computed tomography is being used increasingly for diagnostic testing. While most attacks are not fatal, hemorrhagic pancreatitis can develop and lead to shock, renal failure, respiratory failure, and death. Management usually involves intravenous fluid replacement, often with nasogastric suction, and opioid pain medication. The etiology of acute pancreatitis probably is related to a direct toxic-metabolic effect of alcohol on pancreatic acinar cells. Fatty acid esters and cytokines appear to play a major role (Schenker and Montalvo, 1998). Two-thirds of patients with recurrent alcoholic pancreatitis will develop chronic pancreatitis. Chronic pancreatitis is treated by replacing the endocrine and exocrine deficiencies that result from pancreatic insufficiency. The development of hyperglycemia often requires insulin for control of blood-sugar levels. Pancreatic enzyme capsules containing lipase, amylase, and proteases may be necessary to treat malabsorption. The average lipase dose is 4000 units to 24,000 units with each meal and snack. Many patients with chronic pancreatitis develop a chronic pain syndrome.
While opioids may be helpful, nonnarcotic methods for pain relief such as antiinflammatory drugs, tricyclic antidepressants, exercise, relaxation techniques, and self-hypnosis are preferred treatments for this population, since cross-dependence to other drugs is not uncommon among alcoholics. Treatment contracts and frequent assessments for signs of addiction are important for patients with chronic pancreatitis receiving chronic opioid therapy, since alcohol-dependent patients are at greater risk for narcotic addiction than are nonalcoholic patients. Liver Ethanol produces a constellation of dose-related deleterious effects in the liver (Fickert and Zatloukal, 2000). The primary effects are fatty infiltration of the liver, hepatitis, and cirrhosis. Because of its intrinsic toxicity, alcohol can injure the liver in the absence of dietary deficiencies (Lieber, 1994). The accumulation of fat in the liver is an early event and can occur in normal individuals after the ingestion of relatively small amounts of ethanol. This accumulation results from inhibition of both the tricarboxylic acid cycle and the oxidation of fat, in part owing to the generation of excess NADH produced by the actions of alcohol dehydrogenase and aldehyde dehydrogenase. Fibrosis, resulting from tissue necrosis and chronic inflammation, is the underlying cause of alcoholic cirrhosis. Normal liver tissue is replaced by fibrous tissue. Alcohol can directly affect stellate cells in the liver, causing deposition of collagen around terminal hepatic venules (Worner and Lieber, 1985). Chronic alcohol use is associated with transformation of stellate cells into collagen-producing, myofibroblast-like cells (Lieber, 1998). The histologic hallmark of alcoholic cirrhosis is the formation of Mallory bodies, which are thought to be related to an altered cytokeratin intermediate cytoskeleton (Denk et al., 2000). A number of underlying molecular
mechanisms have been proposed. Phospholipids are a primary target of peroxidation and can be altered by alcohol in nonhuman primate models. Phosphatidylcholine levels are decreased in hepatic mitochondria and are associated with decreased oxidase activity and oxygen consumption (Lieber et al., 1994a,b). Cytokines, such as transforming growth factor-β and tumor necrosis factor-α, can increase rates of fibrogenesis and fibrosis within the liver (McClain et al., 1993). Acetaldehyde is thought to have a number of adverse effects including depletion of glutathione (Lieber, 2000), depletion of vitamins and trace metals, and decreased transport and secretion of proteins owing to inhibition of tubulin polymerization (Lieber, 1997b). Acetaminophen-induced hepatic toxicity (see Chapter 27: Analgesic-Antipyretic and Antiinflammatory Agents and Drugs Employed in the Treatment of Gout) has been associated with alcoholic cirrhosis as a result of alcohol-induced increases in microsomal production of toxic acetaminophen metabolites (Whitcomb and Block, 1994; Seeff et al., 1986). Persons who are alcohol dependent may take large amounts of acetaminophen because of chronic pain. Alcohol also appears to increase intracellular free hydroxyethyl radical formation (Mantle and Preedy, 1999), and there is evidence that endotoxins may play a role in the initiation and exacerbation of alcohol-induced liver disease (Bode et al., 1987). Hepatitis C appears to be an important cofactor in the development of end-stage alcoholic liver disease (Regev and Jeffers, 1999). Several strategies to treat alcoholic liver disease have been evaluated. Prednisolone may improve survival in patients with hepatic encephalopathy (Lieber, 1998). Nutrients such as S-adenosylmethionine and polyunsaturated lecithin have been found to have beneficial effects in nonhuman primates and are undergoing clinical trials.
Other medications that have been tested include oxandrolone, propylthiouracil (Orrego et al., 1987), and colchicine (Lieber, 1997b). At present, however, none of these drugs is approved by the United States Food and Drug Administration (FDA) for the treatment of alcoholic liver disease. The current primary treatment for liver failure, including alcoholic liver disease, is transplantation. Long-term outcome studies suggest that patients who are alcohol dependent have survival rates similar to those of patients with other types of liver disease. Alcoholics with hepatitis C may respond to interferon (McCullough and O'Connor, 1998). Vitamins and Minerals The almost complete lack of protein, vitamins, and most other nutrients in alcoholic beverages predisposes those who consume large quantities of alcohol to nutritional deficiencies. Alcoholics often present with these deficiencies due to decreased intake, decreased absorption, or impaired utilization of nutrients. The peripheral neuropathy, Korsakoff's psychosis, and Wernicke's encephalopathy seen in alcoholics probably are caused by deficiencies of the B-complex of vitamins (particularly thiamine), although direct toxicity produced by alcohol itself has not been ruled out (Harper, 1998). Liver failure secondary to cirrhosis, resulting in impaired clearance of toxins, also may result in alcohol-induced brain damage. Chronic alcohol abuse decreases the dietary intake of retinoids and carotenoids and enhances the metabolism of retinol by the induction of degradative enzymes (Leo and Lieber, 1999). Retinol and ethanol compete for metabolism by alcohol dehydrogenases; vitamin A supplementation therefore should be monitored carefully in alcoholics when they are consuming alcohol to avoid retinol-induced hepatotoxicity. The chronic consumption of alcohol inflicts an oxidative stress on the liver due to generation of free radicals, contributing to ethanol-induced liver injury.
The antioxidant effects of α-tocopherol (vitamin E) may ameliorate some of this ethanol-induced toxicity in the liver (Nordmann, 1994). Plasma levels of α-tocopherol often are reduced in myopathic alcoholics compared to alcoholic patients without myopathy. Chronic alcohol consumption has been implicated in osteoporosis. The reasons for this decreased bone mass remain unclear, although impaired osteoblastic activity has been suggested. Acute administration of ethanol produces an initial reduction in serum parathyroid hormone (PTH) and Ca2+ levels, followed by a rebound increase in PTH that does not restore Ca2+ levels to normal. The hypocalcemia observed after chronic alcohol intake also appears to be unrelated to effects of alcohol on PTH levels, and alcohol likely inhibits bone remodeling by a mechanism independent of Ca2+-regulating hormones (Sampson, 1997). Vitamin D also may play a role. Since vitamin D requires hydroxylation in the liver for activation, alcohol-induced liver damage can indirectly affect the role of vitamin D in the intestinal and renal absorption of Ca2+. Alcoholics tend to have lowered serum and brain levels of magnesium, which may contribute to their predisposition to brain injuries such as stroke (Altura and Altura, 1999). Deficits in intracellular magnesium levels may disturb cytoplasmic and mitochondrial bioenergetic pathways, potentially leading to calcium overload and ischemia. Although there is general agreement that total magnesium levels are decreased in alcoholics, it is less clear that this also applies to ionized magnesium, the physiologically active form (Hristova et al., 1997). Magnesium sulfate is sometimes used in the treatment of alcohol withdrawal, but its efficacy has been questioned (Erstad and Cotugno, 1995). Sexual Function Despite the widespread belief that alcohol can enhance sexual activities, the opposite effect is noted more often. Many drugs of abuse, including alcohol, have disinhibiting effects that may lead initially to increased libido. With excessive, long-term use, however, alcohol often leads to a deterioration of sexual function. 
While alcohol cessation may reverse many sexual problems, patients with significant gonadal atrophy are less likely to respond to discontinuation of alcohol consumption (Sikkink and Fleming, 1992). Both acute and chronic alcohol use can lead to impotence in men. Increased blood alcohol concentrations lead to decreased sexual arousal, increased ejaculatory latency, and decreased orgasmic pleasure. The incidence of impotence may be as high as 50% in patients with chronic alcoholism. Additionally, many chronic alcoholics develop testicular atrophy and decreased fertility. The mechanism is complex and likely involves altered hypothalamic function and a direct toxic effect of alcohol on Leydig cells. Testosterone levels may be depressed, but many men who are alcohol dependent have normal testosterone and estrogen levels. Gynecomastia is associated with alcoholic liver disease and is related to increased cellular response to estrogen and to accelerated metabolism of testosterone. Sexual function in alcohol-dependent women is less clearly understood. Many female alcoholics complain of decreased libido, decreased vaginal lubrication, and menstrual cycle abnormalities. Their ovaries often are small and without follicular development. Some data suggest that fertility rates are lower for alcoholic women. The presence of comorbid disorders such as anorexia nervosa or bulimia is likely to aggravate the problem. The prognosis for men and women who become abstinent is favorable in the absence of significant hepatic or gonadal failure (O'Farrell et al., 1997). Hematological and Immunological Effects Chronic alcohol use is associated with a number of anemias. Microcytic anemia can occur because of chronic blood loss and iron deficiency. Macrocytic anemias and increases in mean corpuscular
volume are common and may occur in the absence of vitamin deficiencies. Normochromic anemias also can occur due to effects of chronic illness on hematopoiesis. In the presence of severe liver disease, morphological changes can include the development of burr cells, schistocytes, and ring sideroblasts. Alcohol-induced sideroblastic anemia may respond to vitamin B6 replacement (Wartenberg, 1998). Alcohol use also is associated with reversible thrombocytopenia. Platelet counts under 20,000 are rare. Bleeding is uncommon unless there is an alteration in vitamin K1-dependent clotting factors. Proposed mechanisms focus on platelet trapping in the spleen and marrow. Alcohol also affects granulocytes and lymphocytes (Schirmer et al., 2000). Effects include leukopenia, alteration of lymphocyte subsets, decreased T-cell mitogenesis, and changes in immunoglobulin production. These disorders may play a role in alcohol-related liver disease. In some patients, a depression of leukocyte migration into inflamed areas may account in part for the poor resistance of alcoholics to some types of infection (e.g., Klebsiella pneumonia, listeriosis, tuberculosis). Alcohol consumption also may alter the distribution and function of lymphoid cells by disrupting cytokine regulation, in particular that involving interleukin-2 (IL-2). Alcohol appears to play a role in the development of HIV infection. In vitro studies with human lymphocytes suggest that alcohol can suppress CD4 T-lymphocyte function and concanavalin A-stimulated IL-2 production and enhance in vitro replication of HIV. Moreover, persons who abuse alcohol have higher rates of high-risk sexual behavior. Teratogenic Effects: Fetal Alcohol Syndrome Although long suspected, the deleterious consequences of alcohol consumption during pregnancy for the offspring were examined rigorously only in the latter half of the twentieth century. 
In 1968, French researchers first noted that children born to alcoholic mothers displayed a common pattern of distinct dysmorphology that later came to be known as fetal alcohol syndrome (FAS) (Lemoine et al., 1968; Jones and Smith, 1973). The diagnosis of FAS typically is based on the observation of a triad of abnormalities in the newborn: (1) a cluster of craniofacial abnormalities, (2) CNS dysfunction, and (3) pre- and/or postnatal stunting of growth. Hearing, language, and speech disorders also may become evident as the child ages (Church and Kaltenbach, 1997). Children who do not meet all the criteria for a diagnosis of FAS still may show physical and/or mental deficits consistent with a partial phenotype, termed fetal alcohol effects (FAEs) or alcohol-related neurodevelopmental disorders (ARNDs). The incidence of FAS is believed to be in the range of 0.5 to 1 per thousand live births in the general population, with rates as high as 2 to 3 per thousand in African-American and Native-American populations. A lower socioeconomic status of the mother, rather than racial background per se, appears to be primarily responsible for the higher incidence of FAS observed in those groups (Abel, 1995). The incidence of FAE is likely higher than that of FAS, making alcohol consumption during pregnancy a major public health problem. Craniofacial abnormalities commonly observed in the diagnosis of FAS consist of a pattern of microcephaly, a long and smooth philtrum, shortened palpebral fissures, a flat midface, and epicanthal folds. Magnetic resonance imaging studies demonstrate decreased volumes in the basal ganglia, corpus callosum, cerebrum, and cerebellum (Mattson et al., 1992). The severity of alcohol effects can vary greatly and depends on the drinking patterns and amount of alcohol consumed by the mother. 
Maternal drinking in the first trimester has been associated with craniofacial abnormalities; facial dysmorphology also is seen in mice exposed to ethanol at the equivalent time in gestation. CNS dysfunction following in utero exposure to alcohol manifests itself in the form of
hyperactivity, attention deficits, mental retardation, and/or learning disabilities. FAS is the most common preventable cause of mental retardation in the Western world (Abel and Sokol, 1987), with afflicted children consistently scoring lower than their peers on a variety of IQ tests. It is now clear that FAS represents the severe end of a spectrum of alcohol effects. A number of studies have documented intellectual deficits, including mental retardation, in children not displaying the craniofacial deformities or retarded growth seen in FAS. Although cognitive improvements are seen with time, decreased IQ scores of FAS children tend to persist as they mature, indicating that the deleterious prenatal effects of alcohol are irreversible. Although a correlation exists between the amount of alcohol consumed by the mother and infant scores on mental and motor performance tests, there is considerable diversity in performance on such tests among children of mothers consuming similar quantities of alcohol. It appears that the peak blood-alcohol concentration reached may be a critical factor in determining the severity of deficits seen in the offspring. Although the evidence is not conclusive, there is a suggestion that even moderate alcohol consumption (two drinks per day) in the second trimester of pregnancy is correlated with impaired academic performance of offspring at age 6 (Goldschmidt et al., 1996). Maternal age also may be a factor. Pregnant women over the age of 30 who drink alcohol create greater risks to their children than do younger women who consume similar amounts of alcohol (Jacobson et al., 1996). Children prenatally exposed to alcohol most frequently present with attentional deficits and hyperactivity, even in the absence of intellectual deficits or craniofacial abnormalities. Furthermore, attentional problems have been observed in the absence of hyperactivity, suggesting that the two phenomena are not necessarily related. 
Fetal alcohol exposure also has been identified as a risk factor for alcohol abuse by adolescents (Baer et al., 1998). Apart from the risk of FAS or FAE to the child, the intake of high amounts of alcohol by a pregnant woman, particularly during the first trimester, greatly increases the chances of spontaneous abortion. Studies with laboratory animals have demonstrated many of the consequences of in utero exposure to ethanol observed in human beings, including hyperactivity, motor dysfunction, and learning deficits. In animals, in utero exposure to ethanol alters the expression patterns of a wide variety of proteins, changes neuronal migration patterns, and results in brain region-specific and cell type-specific alterations in neuronal numbers. Indeed, specific periods of vulnerability may exist for the various neuronal populations in the brain. Genetics also may play a role in determining vulnerability to ethanol; there are differences among strains of rats in susceptibility to the prenatal effects of ethanol. Finally, multidrug abuse, such as the concomitant administration of cocaine with ethanol, enhances fetal damage and mortality. Acute Ethanol Intoxication Increased reaction time, diminished fine motor control, impulsivity, and impaired judgment become evident when the concentration of ethanol in the blood is 20 to 30 mg/dl. More than 50% of persons are grossly intoxicated by a concentration of 150 mg/dl. In fatal cases, the average concentration is about 400 mg/dl, although alcohol-tolerant individuals often can withstand these blood alcohol levels. The definition of intoxication varies by state and country. In the United States, most states set the ethanol level defined as intoxication at 80 to 100 mg/dl. There is increasing evidence that lowering the limit to 50 to 80 mg/dl can reduce motor vehicle injuries and fatalities significantly. 
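The blood alcohol thresholds quoted above can be summarized in a small lookup. The sketch below is purely illustrative, not a clinical or forensic tool: the function name and the exact cutoffs between categories are assumptions drawn from the ranges the text gives, and individual tolerance varies widely.

```python
# Illustrative sketch: map a blood alcohol level (BAL, mg/dl) to the broad
# impairment categories described in the text. Cutoffs are approximate;
# alcohol-tolerant individuals may show far less impairment at a given BAL.

def classify_bal(bal_mg_dl: float) -> str:
    """Return the approximate impairment category for a BAL in mg/dl."""
    if bal_mg_dl < 20:
        return "minimal measurable impairment"
    elif bal_mg_dl < 80:
        return "impaired reaction time, fine motor control, and judgment"
    elif bal_mg_dl < 150:
        return "legally intoxicated in most U.S. states (80-100 mg/dl limit)"
    elif bal_mg_dl < 400:
        return "gross intoxication in the majority of persons"
    else:
        return "potentially lethal range (average fatal level ~400 mg/dl)"

print(classify_bal(25))   # impaired reaction time, fine motor control, and judgment
print(classify_bal(160))  # gross intoxication in the majority of persons
```

Note that the boundaries overlap in reality (some persons are grossly intoxicated well below 150 mg/dl), which is one reason the text emphasizes measured BALs over clinical impression alone.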
While alcohol can be measured in saliva, urine, sweat, and blood, measurement of levels in exhaled air remains the primary method of assessing the level of intoxication. Many factors, such as body weight and composition and the rate of absorption from the gastrointestinal tract, determine the concentration of ethanol in the blood after ingestion of a given amount of ethanol. On average, the ingestion of three standard drinks (42 grams of alcohol) on an
empty stomach results in a maximum blood concentration of 67 to 92 mg/dl in men. After a mixed meal, the maximal blood concentration from three drinks is 30 to 53 mg/dl in men. Concentrations of alcohol in blood will be higher in women than in men consuming the same amount of alcohol because, on average, women are smaller than men, have less body water per unit of weight into which ethanol can distribute, and have less gastric alcohol dehydrogenase activity than men. For individuals with normal hepatic function, ethanol is metabolized at a rate of one standard drink every 60 to 90 minutes. The characteristic signs and symptoms of alcohol intoxication are well known. Nevertheless, an erroneous diagnosis of drunkenness may occur with patients who appear inebriated but who have not ingested ethanol. Diabetic coma, for example, may be mistaken for severe alcoholic intoxication. Drug intoxication, cerebrovascular accidents, and skull fractures also may be confused with alcohol intoxication. The odor of the breath of a person who has consumed ethanol is due not to ethanol vapor but to impurities in alcoholic beverages. Breath odor in a case of suspected intoxication can be misleading, because other conditions can produce a similar odor. Blood alcohol levels are necessary to confirm the presence or absence of alcohol intoxication (Schuckit, 1995). The treatment of acute alcohol intoxication is based on the severity of respiratory and CNS depression. Acute alcohol intoxication can be a medical emergency, and a number of young people die every year from this disorder. Patients who are comatose and who exhibit evidence of respiratory depression should be intubated to protect the airway and to provide ventilatory assistance. The stomach may be lavaged, but care must be taken to prevent pulmonary aspiration of the return flow. Since ethanol is freely miscible with water, it can be removed from blood by hemodialysis (Schuckit, 1995). 
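The dose-to-concentration arithmetic described above can be sketched with a Widmark-style estimate. This is a rough back-of-the-envelope model, not a clinical calculator: the body-water distribution ratios (r ≈ 0.68 for men, ≈ 0.55 for women) are conventional Widmark constants, not figures given in this chapter, and first-pass losses are ignored, so real peak levels (especially after a meal) run lower.

```python
# Widmark-style estimate of peak BAL and time to eliminate it.
# Assumptions: dose distributes into r * body weight of body water,
# and elimination is zero-order at ~15 mg/dl per hour (the chapter's figure).

def peak_bal_mg_dl(grams_ethanol: float, weight_kg: float, r: float = 0.68) -> float:
    """Peak BAL in mg/dl: dose (g) / (r * weight (kg)) gives g/L; x100 -> mg/dl."""
    grams_per_liter = grams_ethanol / (r * weight_kg)
    return grams_per_liter * 100.0

def hours_to_eliminate(bal_mg_dl: float, rate_mg_dl_per_hr: float = 15.0) -> float:
    """Hours for zero-order metabolism to clear a given BAL."""
    return bal_mg_dl / rate_mg_dl_per_hr

# Three standard drinks (42 g) in a 70-kg man on an empty stomach:
bal = peak_bal_mg_dl(42.0, 70.0)  # ~88 mg/dl, inside the 67-92 mg/dl range quoted
print(round(bal), "mg/dl; ~", round(hours_to_eliminate(bal), 1), "h to eliminate")
```

Using r = 0.55 for the same dose in a woman of equal weight gives a correspondingly higher estimate, consistent with the sex difference the text attributes to body water and gastric alcohol dehydrogenase.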
Acute alcohol intoxication is not always associated with coma, and careful observation is the primary treatment. Usual care involves observing the patient in the emergency room for 4 to 6 hours while the patient's tissues metabolize the ingested ethanol. Blood alcohol levels will be reduced at a rate of about 15 mg/dl per hour. During this period, some individuals may display extremely violent behavior. Sedatives and antipsychotic agents have been employed to quiet such patients. Great care must be taken, however, when using sedatives to treat patients who already have ingested an excessive amount of another CNS depressant, i.e., ethanol. Clinical Uses of Ethanol Dehydrated alcohol may be injected in close proximity to nerves or sympathetic ganglia to relieve the long-lasting pain related to trigeminal neuralgia, inoperable carcinoma, and other conditions. Epidural, subarachnoid, and lumbar paravertebral injections of ethanol also have been employed for inoperable pain. For example, lumbar paravertebral injections of ethanol may destroy sympathetic ganglia and thereby produce vasodilation, relieve pain, and promote healing of lesions in patients with vascular disease of the lower extremities. The systemic administration of ethanol is confined to the treatment of poisoning by methyl alcohol and ethylene glycol (see Chapter 68: Nonmetallic Environmental Toxicants: Air Pollutants, Solvents and Vapors, and Pesticides). The accidental or intentional consumption of methanol leads to retinal and optic nerve damage, potentially resulting in blindness. Formic acid, a metabolite of methanol, is responsible for the toxicity. Treatment consists of sodium bicarbonate to combat acidosis, hemodialysis, and the administration of ethanol, which competes with methanol for metabolism by alcohol dehydrogenase.
The use of alcohol to treat patients in alcohol withdrawal or obstetrical patients with premature contractions is no longer recommended. Some medical centers continue to use alcohol to prevent or reduce the risk of alcohol withdrawal in postoperative patients, but administering a combination of a benzodiazepine with haloperidol or clonidine may be more appropriate (Spies and Rommelspacher, 1999). Mechanisms of CNS Effects of Ethanol Acute Intoxication Alcohol disturbs the fine balance that exists between excitatory and inhibitory influences in the brain, resulting in the anxiolysis, ataxia, and sedation that follow alcohol consumption. This is accomplished either by enhancing inhibitory neurotransmission or by antagonizing excitatory neurotransmission. Although ethanol was long thought to act nonspecifically by disordering lipids in cell membranes, it is now believed that proteins constitute the primary molecular sites of action for ethanol. A number of putative sites at which ethanol may act have been identified, and ethanol likely produces its effects by simultaneously altering the functioning of a number of proteins that can affect neuronal excitability. A key issue has been to identify those proteins that determine neuronal excitability and are sensitive to ethanol at the low concentrations (5 to 20 mM) that produce behavioral effects. Ion Channels A number of different types of ion channels in the CNS are sensitive to ethanol, including representatives of the ligand-gated, G protein-regulated, and voltage-gated channel families. The primary mediators of inhibitory neurotransmission in the brain are the ligand-gated GABAA receptors (see Chapter 12: Neurotransmission and the Central Nervous System), whose function is markedly enhanced by a number of classes of sedative, hypnotic, and anesthetic agents including barbiturates, benzodiazepines, and volatile anesthetics (Mehta and Ticku, 1999). 
Substantial biochemical, electrophysiological, and behavioral data implicate the GABAA receptor as an important target for the in vivo actions of ethanol. The GABAA-receptor antagonist bicuculline as well as antagonists at the benzodiazepine binding site on GABAA receptors decrease alcohol consumption in animal models (Harris et al., 1998). Furthermore, administration of the GABAA-receptor agonist muscimol into specific regions of the limbic system in rats can substitute for ethanol in discrimination studies (Mihic, 1999). Phosphorylation, particularly by protein kinase C (PKC), appears to play a major role in determining the GABAA receptor's sensitivity to ethanol. Neuronal nicotinic acetylcholine receptors (see Chapter 9: Agents Acting at the Neuromuscular Junction and Autonomic Ganglia) also may be prominent molecular targets of alcohol action (Narahashi et al., 1999). Both enhancement and inhibition of nicotinic acetylcholine receptor function have been reported, depending on receptor subunit composition and the concentrations of ethanol tested. Effects of ethanol on these receptors may be particularly important, as there is an observed association between smoking and alcohol consumption in human beings (Collins, 1990). Furthermore, several studies indicate that nicotine increases alcohol consumption in animal models (Smith et al., 1999). Another member of the cation-selective ion-channel superfamily of receptors is the serotonin 5-HT3 receptor (see Chapter 11: 5-Hydroxytryptamine (Serotonin): Receptor Agonists and Antagonists). Electrophysiological studies demonstrate enhancement by ethanol of 5-HT3-receptor function (Lovinger, 1999). Excitatory ionotropic glutamate receptors are divided into the NMDA and non-NMDA receptor classes, with the latter being composed of kainate- and AMPA-receptor subtypes (see Chapter 12: Neurotransmission and the Central Nervous System). Ethanol inhibits the function of the NMDA-
and kainate-receptor subtypes, whereas AMPA receptors are largely resistant to alcohol (Weiner et al., 1999). As with the GABAA receptors, phosphorylation of the glutamate receptor can determine sensitivity to ethanol. The nonreceptor tyrosine kinase Fyn phosphorylates NMDA receptors, rendering them less sensitive to inhibition by ethanol (Anders et al., 1999) and perhaps explaining why null mutant mice lacking Fyn display significantly greater sensitivity to the hypnotic effects of ethanol. NMDA receptors play a crucial role in the development of long-term potentiation (LTP), a form of neuronal plasticity that may constitute a cellular substrate for memory. Ethanol inhibits LTP, although this does not appear to be accomplished solely through inhibition of NMDA receptors (Schummers et al., 1997). Although considerable research effort has been expended on the ligand-gated ion channels, a number of other types of channels recently have been found to be sensitive to alcohol at concentrations routinely achieved in vivo. Ethanol enhances the activity of large-conductance, calcium-activated potassium channels in neurohypophyseal terminals (Dopico et al., 1999), perhaps contributing to the reduced release of oxytocin and vasopressin after ethanol consumption. Ethanol also inhibits N- and P/Q-type Ca2+ channels in a manner that can be antagonized by channel phosphorylation by protein kinase A (PKA) (Solem et al., 1997). Finally, G protein-coupled, inwardly rectifying potassium channels, which regulate synaptic transmission and neuronal firing rates, exhibit enhanced function in the presence of low concentrations of ethanol (Lewohl et al., 1999; Kobayashi et al., 1999). Kinases and Signaling Enzymes As mentioned above, phosphorylation by a number of protein kinases can affect the functioning of many receptors. 
The behavioral consequences of this were illustrated in null mutant mice lacking the γ isoform of PKC; these mice display reduced effects of ethanol measured behaviorally and a loss of enhancement by ethanol of GABA's effects measured in vitro (Harris et al., 1995). There is some uncertainty as to whether ethanol directly interacts with PKC. Some investigators have reported inhibition of function, while others have seen no effect (Stubbs and Slater, 1999), perhaps due to differential sensitivity to ethanol of specific PKC isoforms. Intracellular signal transduction cascades, such as those involving MAP and tyrosine kinases and neurotrophic factor receptors, also are thought to be affected by ethanol (Valenzuela and Harris, 1997). Translocation of PKC and PKA between subcellular compartments also is sensitive to alcohol (Constantinescu et al., 1999). Ethanol enhances the activities of some of the nine isoforms of adenylyl cyclase, with the type VII isoform being the most sensitive (Tabakoff and Hoffman, 1998). This promotes increased production of cyclic AMP and thus increased activity of PKA. Ethanol's actions appear to be mediated by activation of the stimulatory G protein Gs as well as by promotion of the interaction between the G protein and the catalytic moiety of adenylyl cyclase. Decreased adenylyl cyclase activities have been reported in alcoholics (Parsian et al., 1996) and even in nondrinkers with family histories of alcoholism, suggesting that lowered adenylyl cyclase activity may be a trait marker for alcoholism (Menninger et al., 1998). Tolerance and Dependence Tolerance is defined as a reduced behavioral or physiological response to the same dose of ethanol (see Chapter 24: Drug Addiction and Drug Abuse). There is a marked acute tolerance that is detectable soon after administration of ethanol. 
Acute tolerance can be demonstrated by measuring behavioral impairment at the same blood alcohol levels (BALs) on the ascending limb of the absorption phase of the BAL-time curve (minutes after ingestion of ethanol) and on the descending limb of the curve as BALs are lowered by metabolism (one or more hours after ingestion). Behavioral impairment and
subjective feelings of intoxication are much greater at a given BAL on the ascending than on the descending limb. There also is a chronic tolerance that develops in the long-term heavy drinker. In contrast to acute tolerance, chronic tolerance often has a metabolic component due to induction of alcohol-metabolizing enzymes. Physical dependence is demonstrated by the elicitation of a withdrawal syndrome when alcohol consumption is terminated. The symptoms and severity are determined by the amount and duration of alcohol consumption and include sleep disruption, autonomic nervous system (sympathetic) activation, tremors, and, in severe cases, seizures. In addition, two or more days after withdrawal, some individuals experience delirium tremens, characterized by hallucinations, delirium, fever, and tachycardia. Delirium tremens is sometimes fatal. Another aspect of dependence is craving and drug-seeking behavior, often termed psychological dependence. Ethanol tolerance and physical dependence are readily studied in animal models. Lines of mice with genetic differences in tolerance and dependence have been characterized, and a search for the relevant genes is under way (Crabbe et al., 1999). Neurobiological mechanisms of tolerance and dependence are not understood completely, but chronic alcohol consumption results in changes in synaptic and intracellular signaling, likely due to changes in gene expression. Most of the systems that are acutely affected by ethanol also are affected by chronic exposure, resulting in an adaptive or maladaptive response that can cause tolerance and dependence. In particular, chronic actions of ethanol likely require changes in signaling by glutamate and GABA receptors and intracellular systems such as PKC (Diamond and Gordon, 1997). 
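The ascending- versus descending-limb comparison used to demonstrate acute tolerance rests on the characteristic shape of the BAL-time curve: a rapid absorption-driven rise followed by a slow, roughly linear fall from zero-order metabolism. A toy simulation can reproduce that shape. All parameters here (absorption rate constant, dose scaling) are illustrative assumptions, chosen only to mimic the qualitative curve; this is not a pharmacokinetic model of any real subject.

```python
# Toy BAL-time curve: first-order absorption from the gut plus zero-order
# (constant-rate) hepatic elimination, integrated by simple Euler steps.
# Parameters are illustrative; only the curve's shape matters here.

def bal_curve(dose_mg_dl: float = 100.0, ka: float = 3.0,
              elim: float = 15.0, hours: float = 8.0, dt: float = 0.01):
    """Simulate BAL over time; returns parallel lists of times (h) and BALs (mg/dl)."""
    gut, blood = dose_mg_dl, 0.0
    times, bals = [], []
    t = 0.0
    while t <= hours:
        absorbed = ka * gut * dt                        # first-order absorption
        gut -= absorbed
        blood = max(0.0, blood + absorbed - elim * dt)  # zero-order elimination
        times.append(t)
        bals.append(blood)
        t += dt
    return times, bals

times, bals = bal_curve()
peak = max(bals)
t_peak = times[bals.index(peak)]
print(f"peak ~{peak:.0f} mg/dl at {t_peak:.1f} h")  # steep rise, slow linear fall
```

Because the curve rises much faster than it falls, a given BAL (say, half the peak) is crossed once early on the ascending limb and again hours later on the descending limb, which is exactly the paired comparison used to demonstrate acute tolerance.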
There is an increase in NMDA receptor function after chronic alcohol ingestion, which may contribute to the CNS hyperexcitability and neurotoxicity seen during ethanol withdrawal (Tabakoff and Hoffman, 1996; Chandler et al., 1998). Arginine vasopressin, acting on V1 receptors, maintains tolerance to ethanol in laboratory animals even after chronic ethanol administration has ceased (Hoffman et al., 1990). The neurobiological basis of the switch from controlled, volitional alcohol use to compulsive and uncontrolled addiction remains obscure. One possibility is impairment of the dopaminergic reward system, with the resultant increase in alcohol consumption representing an attempt to regain activation of that system. In addition, the prefrontal cortex is particularly sensitive to damage from alcohol abuse and influences decision making and emotion, processes clearly compromised in the alcoholic (Pfefferbaum et al., 1998). Thus, impairment of executive function in cortical regions by chronic alcohol consumption may be responsible for some of the lack of judgment and control that is expressed as obsessive alcohol consumption. It should be reiterated that the loss of brain volume and impairment of function seen in the chronic alcoholic is at least partially reversible by abstinence but will worsen with continued drinking (Pfefferbaum et al., 1998). Obviously, early diagnosis and treatment of alcoholism are important, as they can limit the brain damage that promotes the spiraling descent into progressively severe addiction. Genetic Influences The concept of alcoholism as a disease was first articulated by Jellinek in 1960; the subsequent acceptance of alcoholism and addiction as "brain diseases" led to a search for biological causes. Studies of rats and mice carried out in Chile, Finland, and the United States showed significant heritabilities (roughly 60%) for many behavioral actions of alcohol, including sedation, ataxia, and, most notably, consumption (Crabbe and Harris, 1991). 
It has long been appreciated that alcoholism "runs in families," and definitive studies of genetics and human alcoholism appeared about 30 years ago. A series of adoption (cross-fostering) and twin studies showed that human alcohol dependence has a genetic component. It is important to note that, although the genetic contribution has varied among studies, it is generally in the range of 40% to 60%, which means that environmental
variables also are critical for individual susceptibility to alcoholism (Begleiter and Kissin, 1995). The search for the genes and alleles responsible for alcoholism is complicated by the polygenic nature of the disease and the general difficulty in defining the multiple genes responsible for complex diseases. One area of research that has been fruitful has been the study of why some populations (mainly Asian) are protected from alcoholism. This protection has been found to be caused by genetic differences in alcohol- and aldehyde-metabolizing enzymes. Specifically, genetic variants of alcohol dehydrogenase that exhibit high activity and variants of aldehyde dehydrogenase that exhibit low activity protect against heavy drinking, because alcohol consumption by individuals who have these variants results in accumulation of acetaldehyde, which produces a variety of unpleasant effects (Li, 2000). These effects are similar to those of disulfiram therapy (see below), but the prophylactic, genetic form of inhibition of alcohol consumption is more effective than the pharmacotherapeutic approach, which is applied after alcoholism has developed. In contrast to these protective genetic variants, there is little information about genes responsible for increased risk for alcoholism. The recent history of psychiatric genetics is that genes identified in one study are not consistently found in other populations. This also is true for alcoholism. Sequence differences in several candidate genes from alcoholics, including genes for a dopamine receptor (D2) and for serotonin-related transporters and enzymes, are not consistently different from the sequences of those genes from nonalcoholic subjects. Several large-scale genetic studies of alcoholism currently are in progress, and these efforts, together with genetic studies in laboratory animals, should lead to identification of alcoholism susceptibility genes. 
It is possible that these studies also will allow genetic classification of subtypes of alcoholism and thereby resolve some of the inconsistencies among study populations. For example, antisocial alcoholism is linked with a polymorphism in a serotonin receptor (5-HT1B), but there is no association of this gene with nonantisocial alcoholism (Lappalainen et al., 1998). Another approach to understanding the inherited biology of alcoholism is to ask what behavioral or functional differences exist between individuals with high and low genetic risks for alcoholism. This may be accomplished by studying young social drinkers with many or few alcoholic relatives [family history positive (FHP) and family history negative (FHN)]. Brain imaging by positron emission tomography has been used in this context. A family history of alcoholism is linked to lower cerebellar metabolism and a blunted effect of a benzodiazepine (lorazepam) on cerebellar metabolism (Volkow et al., 1995). Because GABAA receptors are the molecular site of benzodiazepine action, these results suggest that a genetic predisposition to alcoholism may be reflected in abnormal function of GABAA receptors. Schuckit and colleagues have studied actions of alcohol in FHP college students and have followed the study subjects for almost 20 years to determine which ones will develop alcoholism or alcohol abuse. It is remarkable that a blunted behavioral and physiological response to alcohol in the original test is associated with a significantly greater risk for later development of alcohol-related problems (Schuckit, 1994). The genes that control initial sensitivity to ethanol are not known, but they may be important for risk for alcohol abuse. At present, there is little evidence that the genes important for alcoholism also are important for other addictions and diseases, with the exception of tobacco use. 
Studies with twins indicate a common genetic vulnerability for alcohol and nicotine dependence (True et al., 1999), which is consistent with the high rate of smoking among alcoholics.

Pharmacotherapy of Alcoholism

Currently, only two drugs are approved in the United States for treatment of alcoholism: disulfiram (ANTABUSE) and naltrexone (REVIA). Disulfiram has a long history of use but has fallen out of
favor because of its side effects and compliance problems. Naltrexone was introduced more recently. The goal of both of these medications is to assist the patient in maintaining abstinence.

Naltrexone

Naltrexone was approved by the FDA for treatment of alcoholism in 1994. It is chemically related to the highly selective opioid-receptor antagonist naloxone (NARCAN), but it has higher oral bioavailability and a longer duration of action than does naloxone. Neither drug has appreciable opioid-receptor agonist effects. These drugs were used initially in the treatment of opioid overdose and dependence because of their ability to antagonize all of the actions of opioids (see Chapters 23: Opioid Analgesics and 24: Drug Addiction and Drug Abuse). There were suggestions from both animal research and clinical experience that naltrexone might reduce alcohol consumption and craving, and this was confirmed in clinical trials in the early 1990s (see O'Malley et al., 2000; Johnson and Ait-Daoud, 2000). There is evidence that naltrexone blocks activation by alcohol of dopaminergic pathways in the brain that are thought to be critical to reward. Naltrexone helps to maintain abstinence by reducing the urge to drink and increasing control when a "slip" occurs. It is not a "cure" for alcoholism and does not prevent relapse in all patients. Naltrexone works best when used in conjunction with some form of psychosocial therapy, such as cognitive behavioral therapy (Anton et al., 1999). It is typically administered after detoxification and given at a dose of 50 mg per day for several months. Good compliance is important to ensure the therapeutic value of naltrexone and has proven to be a problem for some patients (Johnson and Ait-Daoud, 2000). The most common side effect of naltrexone is nausea, which is more common in women than in men and subsides if the patients remain abstinent (O'Malley et al., 2000). When given in excessive doses, naltrexone can cause liver damage.
It is contraindicated in patients with liver failure or acute hepatitis and should be used only after careful consideration in patients with active liver disease. Nalmefene (REVEX) is another opioid antagonist that appears promising in preliminary clinical tests. It has a number of advantages over naltrexone, including greater oral bioavailability, longer duration of action, and lack of dose-dependent problems with liver toxicity.

Disulfiram

Disulfiram (tetraethylthiuram disulfide; ANTABUSE) was taken in the course of an investigation of its potential anthelminthic usefulness by two Danish physicians, who became ill at a cocktail party and were quick to realize that the compound had altered their responses to alcohol. They initiated a series of pharmacological and clinical studies that provided the basis for the use of disulfiram as an adjunct in the treatment of chronic alcoholism. Similar responses to alcohol ingestion are produced by various congeners of disulfiram, cyanamide, the fungus Coprinus atramentarius, the hypoglycemic sulfonylureas, metronidazole, certain cephalosporins, and the ingestion of animal charcoal. Disulfiram, given alone, is a relatively nontoxic substance, but it alters the metabolism of alcohol and causes the blood acetaldehyde concentration to rise to five to ten times above the level achieved when ethanol is given to an individual not pretreated with disulfiram. Acetaldehyde, produced as a result of the oxidation of ethanol by alcohol dehydrogenase, ordinarily does not accumulate in the body, because it is further oxidized almost as soon as it is formed, primarily by aldehyde dehydrogenase. Following the administration of disulfiram, both cytosolic and mitochondrial forms of this enzyme are irreversibly inactivated to varying degrees, and the concentration of acetaldehyde rises. It is unlikely that disulfiram itself is responsible for the enzyme inactivation in vivo; several
active metabolites of the drug, especially diethylthiomethylcarbamate, behave as suicide-substrate inhibitors of aldehyde dehydrogenase in vitro. These metabolites reach significant concentrations in plasma following the administration of disulfiram (Johansson, 1992).

The ingestion of alcohol by individuals previously treated with disulfiram gives rise to marked signs and symptoms. Within about 5 to 10 minutes, the face feels hot, and soon afterwards it is flushed and scarlet in appearance. As the vasodilation spreads over the whole body, intense throbbing is felt in the head and neck, and a pulsating headache may develop. Respiratory difficulties, nausea, copious vomiting, sweating, thirst, chest pain, considerable hypotension, orthostatic syncope, marked uneasiness, weakness, vertigo, blurred vision, and confusion are observed. The facial flush is replaced by pallor, and the blood pressure may fall to shock levels.

Disulfiram and/or its metabolites can inhibit many enzymes with crucial sulfhydryl groups, and it thus has a wide spectrum of biological effects. It inhibits hepatic microsomal drug-metabolizing enzymes and thereby interferes with the metabolism of phenytoin, chlordiazepoxide, barbiturates, and other drugs. Disulfiram by itself is usually innocuous, but it may cause acneform eruptions, urticaria, lassitude, tremor, restlessness, headache, dizziness, a garlic-like or metallic taste, and mild gastrointestinal disturbances. Peripheral neuropathies, psychosis, and acetonemia also have been reported. Alarming reactions may result from the ingestion of even small amounts of alcohol in persons being treated with disulfiram. The use of disulfiram as a therapeutic agent thus is not without danger, and it should be attempted only under careful medical and nursing supervision. Patients must be warned that as long as they are taking disulfiram, the ingestion of alcohol in any form will make them sick and may endanger their lives.
Patients must learn to avoid disguised forms of alcohol, as in sauces, fermented vinegar, cough syrups, and even after-shave lotions and back rubs. The drug never should be administered until the patient has abstained from alcohol for at least 12 hours. In the initial phase of treatment, a maximal daily dose of 500 mg is given for 1 to 2 weeks. Maintenance dosage then ranges from 125 to 500 mg daily, depending on tolerance to side effects. Unless sedation is prominent, the daily dose should be taken in the morning, the time when the resolve not to drink may be strongest. Sensitization to alcohol may last as long as 14 days after the last ingestion of disulfiram because of the slow rate of restoration of aldehyde dehydrogenase (Johansson, 1992).

Acamprosate

Acamprosate (N-acetylhomotaurine, calcium salt), an analog of GABA widely used in Europe for the treatment of alcoholism, is not yet approved for use in the United States. A number of double-blind, placebo-controlled studies have demonstrated that acamprosate decreases drinking frequency and reduces relapse drinking in abstinent alcoholics. It acts in a dose-dependent manner (1.3 to 2 g/day; Paille et al., 1995) and appears to have efficacy similar to that of naltrexone. Studies in laboratory animals have shown that acamprosate decreases alcohol intake without affecting food or water consumption. Acamprosate generally is well tolerated by patients, with diarrhea being the main side effect (Garbutt et al., 1999). No abuse liability has been noted. The drug undergoes minimal metabolism in the liver, is primarily excreted by the kidneys, and has an elimination half-life of 18 hours after oral administration (Wilde and Wagstaff, 1997). Concomitant use of disulfiram appears to increase the effectiveness of acamprosate, without any adverse drug interactions being noted (Besson et al., 1998).
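As a rough orientation, the reported 18-hour half-life can be translated into approximate washout times using the standard first-order elimination relationship; this is a general pharmacokinetic sketch, not a calculation taken from the cited trials:

```latex
% Fraction of drug remaining after time t, assuming first-order elimination
\[
  f(t) = \left(\tfrac{1}{2}\right)^{t/t_{1/2}},
  \qquad t_{1/2} \approx 18\ \text{h}
\]
% After five half-lives (5 x 18 h = 90 h, i.e., just under 4 days):
\[
  f(90\ \text{h}) = \left(\tfrac{1}{2}\right)^{5} = \tfrac{1}{32} \approx 3\%
\]
```

By the same first-order logic, steady-state plasma levels during repeated daily dosing are approached over roughly the same four- to five-half-life interval.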
The mechanism of action of acamprosate is obscure, although there is some evidence that it affects the function of the NMDA subtype of ionotropic glutamate receptors in brain. Whether or not this modulation of NMDA receptor function is
responsible for the drug's therapeutic effects is unknown (Johnson and Ait-Daoud, 2000).

Other Agents

Ondansetron, a 5-HT3-receptor antagonist and antiemetic drug (see Chapters 11: 5-Hydroxytryptamine (Serotonin): Receptor Agonists and Antagonists and 38: Prokinetic Agents, Antiemetics, and Agents Used in Irritable Bowel Syndrome), reduces alcohol consumption in laboratory animals and currently is being tested in alcoholic subjects. Preliminary findings suggest that ondansetron is effective in the treatment of early-onset alcoholics, who respond poorly to psychosocial treatment alone, although the drug does not appear to work well in other types of alcoholics (Johnson and Ait-Daoud, 2000). Ondansetron administration lowers the amount of alcohol consumed, particularly by drinkers who consume fewer than ten drinks per day (Sellers et al., 1994). It also decreases the subjective effects of ethanol on 6 of 10 scales measured, including the desire to drink (Johnson et al., 1993a), while at the same time not having any effect on the pharmacokinetics of ethanol (Johnson et al., 1993b).

Initial studies suggested that lithium or selective serotonin-reuptake inhibitors (SSRIs) might be useful in reducing alcohol consumption, but subsequent clinical trials have not provided evidence for beneficial effects (see Garbutt et al., 1999). There have been limited clinical tests of several dopaminergic agonists and antagonists, serotonergic agonists, and calcium channel antagonists in reducing ethanol consumption, but the results with these agents generally have not been encouraging (Johnson and Ait-Daoud, 2000). An emerging approach is treatment with a combination of two or more drugs, particularly combining drugs with different mechanisms of action (e.g., naltrexone plus ondansetron or acamprosate); this approach may be useful if a limited therapeutic response is obtained with a single agent.

Chapter 19. Drugs and the Treatment of Psychiatric Disorders: Depression and Anxiety Disorders
Overview

Drugs with demonstrated efficacy in a broad range of severe psychiatric disorders have been developed since the 1950s, leading to development of the subspecialty of psychopharmacology. Knowledge of the actions of such agents has greatly stimulated research in biological psychiatry aimed at defining pathophysiological changes. This chapter reviews current knowledge of the pharmacology of antidepressants and the treatment of depression and anxiety disorders. Chapter 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania covers antipsychotic and antimanic agents and the treatment of psychotic and manic-depressive illness.

The treatment of depression relies on a varied group of antidepressant therapeutic agents, in part because clinical depression is a complex syndrome of widely varying severity. The first agents used successfully were tricyclic antidepressants, which elicit a wide range of neuropharmacological effects in addition to their presumed primary action of inhibiting norepinephrine and, variably, serotonin transport into nerve endings, thus leading to sustained facilitation of noradrenergic and perhaps serotonergic function in the brain. Inhibitors of monoamine oxidase, which increase the brain concentrations of many amines, also have been used. Currently, a series of innovative agents, most notably the selective serotonin-reuptake inhibitors (see Chapter 11: 5-Hydroxytryptamine (Serotonin): Receptor Agonists and Antagonists), dominate the treatment of depressive disorders and are widely used to treat severe anxiety disorders.

In addition to the widespread use of antidepressants, the pharmacological treatment of anxiety disorders commonly employs benzodiazepine sedative-antianxiety agents, which facilitate neuronal hyperpolarization through the gamma-aminobutyric acid (GABA) receptor-chloride channel macromolecular complex. Potent benzodiazepines are effective in panic disorder as well as in generalized anxiety disorder. Their long-term risk:benefit ratio remains controversial. Serotonin 5-HT1A-receptor partial agonists such as buspirone also have useful anxiolytic and other psychotropic activity and less likelihood of inducing sedation or dependence. Specialized uses of antidepressants discussed in this chapter include the treatment of anxiety disorders, including obsessive-compulsive disorder, panic-agoraphobia, and social phobias.

Introduction: Psychopharmacology

The use of drugs with demonstrated efficacy in psychiatric disorders has become widespread since the mid-1950s. Today, about 10% to 15% of prescriptions written in the United States are for medications intended to affect mental processes: to sedate, stimulate, or otherwise change mood, thinking, or behavior. This practice reflects both the high frequency of primary psychiatric disorders and the nearly inevitable emotional reactions of persons with medical illnesses. In addition, many drugs used for other purposes also modify emotions and cognition, either as part of their usual actions or as toxic effects of overdosage (see especially Chapter 24: Drug Addiction and Drug Abuse). This and the following chapter discuss psychotropic agents used primarily for the treatment of psychiatric disorders. The study of the chemistry, disposition, actions, and clinical pharmacology of such drugs has led to development of the specialty psychopharmacology.

Psychotropic agents can be placed into four major categories. Antianxiety-sedative agents, particularly the benzodiazepines, are those used for the drug therapy of anxiety disorders; their pharmacology is reviewed in Chapter 17: Hypnotics and Sedatives. Antidepressants (mood-elevating agents) and antimanic or mood-stabilizing drugs (notably, lithium salts and certain anticonvulsants; see Chapter 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania) are those used to treat affective or mood disorders and related conditions.
Antipsychotic or neuroleptic drugs (see Chapter 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania) are those used to treat very severe psychiatric illnesses, the psychoses and mania; they have beneficial effects on mood and thought, but many standard neuroleptic agents carry the risk of producing characteristic side effects that mimic neurological diseases, whereas modern antipsychotics are associated with weight gain and adverse metabolic effects such as diabetes.

The use of drugs in the treatment of psychiatric disorders is becoming more precise as psychiatric diagnoses continue to gain objectivity, coherence, and reliability. Searches for biological bases of psychiatric illnesses have been stimulated by knowledge of the mechanisms of action of psychotropic agents and the emergence of a medical discipline commonly known as biological psychiatry (Baldessarini, 2000). The diagnostic terminology and criteria for psychiatric disorders currently employed in the United States are well described in the Diagnostic and Statistical Manual of Mental Disorders of the American Psychiatric Association (2000), and updated reviews of psychiatric science are provided in Sadock and Sadock (2000).

History

Modification of behavior, mood, and emotion by drugs always has been a favorite practice of human beings. The use of psychoactive drugs evolved along two related paths: the use of substances to modify normal behavior and to produce altered states of feeling for religious, ceremonial, or recreational purposes, and their use to alleviate mental ailments. Fascinating accounts of the early
history and characteristics of many psychoactive compounds, particularly those derived from natural products, are presented by Lewin (1931) and Efron and associates (1967) (see Ayd and Blackwell, 1970; Baldessarini, 1985; Caldwell, 1978).

In 1845, Moreau proposed that hashish intoxication provided a model psychosis useful in the study of insanity. Three decades later, Freud presented his study of cocaine and suggested its potential uses in pharmacotherapy. Soon thereafter, Kraepelin founded the first laboratory of clinical psychopharmacology in Germany and evaluated psychological effects of drugs in human beings. In 1931, Sen and Bose published the first report of the use of Rauwolfia serpentina in the treatment of insanity. Insulin shock, pentylenetetrazole-induced convulsions, and electroconvulsive therapy (ECT) followed in 1933, 1934, and 1937, respectively. Treatments for both severe depression and schizophrenia thus became available. Amphetamine (a congener of ephedrine, an active component of the Chinese herbal agent ma huang) was the first synthetic drug to provide a model psychosis. In 1943, Hofmann ingested a minute amount of the ergot derivative lysergic acid diethylamide (LSD) and experienced its hallucinogenic effects. His report of the high potency of LSD popularized the concept that a toxic substance or product of metabolism might be a cause of mental illness. The first modern report on the treatment of psychotic excitement or mania with lithium salts was that of Cade in 1949 (see Chapter 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania). Because of concerns about the toxicity of lithium, this discovery was slow in gaining general acceptance by the medical community. In 1950, chlorpromazine was synthesized in France.
Recognition of the unique effects of chlorpromazine by Laborit and colleagues and its use in psychiatric patients in 1952 by Delay and Deniker marked the beginnings of modern psychopharmacology (see Ayd and Blackwell, 1970; Chapter 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania). A report on meprobamate by Berger (1954) marked the beginning of investigations of modern sedatives with useful antianxiety properties. An antitubercular drug, iproniazid, was introduced in the early 1950s and soon recognized as a monoamine oxidase inhibitor and antidepressant (Kline, 1958); in 1958, Kuhn recognized the antidepressant effect of imipramine. The first of the antianxiety benzodiazepines, chlordiazepoxide, was developed by Sternbach in 1957 (see Chapter 17: Hypnotics and Sedatives). In the following year, Janssen discovered the antipsychotic properties of haloperidol, a butyrophenone (see Chapter 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania), and thus still another class of antipsychotic agents became available.

During the 1960s, the expansion of psychopharmacological research was rapid, and many new theories of psychoactive drug effects were introduced. The clinical efficacy of many of these agents was firmly established during that decade. For many years, the role of biogenic amines and their receptors in the central nervous system (CNS) in mediating effects of psychotropic drugs has been emphasized and has stimulated searches for the causes of mental illness (see Baldessarini, 2000). In addition, increasing attention has been paid to the liabilities of treatment with psychotherapeutic drugs, especially their limited efficacy in severe or chronic mental illnesses, their risk of sometimes serious adverse effects, and the limitations, conservatism, and basic circularity of screening and testing methods used to develop new agents (see Baldessarini, 2000).
The antipsychotic, mood-stabilizing, and antidepressant agents used to treat the most severe mental illnesses have had a remarkable impact on psychiatric practice and theory, an impact that legitimately can be called revolutionary and one that is experiencing continued innovation.

Nosology

The different classes of psychotropic agents are selective in their ability to modify symptoms of
mental illnesses. The optimal use of such drugs thus requires familiarity with the differential diagnosis of psychiatric conditions (see Sadock and Sadock, 2000; American Psychiatric Association, 2000). A few salient aspects of psychiatric classification are summarized briefly here, and additional information is provided in the discussion of specific classes of drugs. Basic distinctions are made among the cognitive disorders, psychotic disorders, mood disorders, anxiety disorders, and disorders of personality.

The cognitive disorder syndromes of delirium and dementia commonly are associated with definable neuropathological, metabolic, or toxic (including drug-induced) changes and are characterized by confusion, disorientation, and memory disturbances as well as behavioral disorganization. In general, the effectiveness of pharmacological treatment of the core cognitive impairment in the dementias remains limited, despite extensive efforts to develop effective treatments. These have included use of stimulants, so-called nootropics (e.g., piracetam), cholinesterase inhibitors, putative cerebral vasodilators (e.g., ergot alkaloids, papaverine, isoxsuprine), and the calcium channel blockers, such as nimodipine (see Chapters 32: Drugs Used for the Treatment of Myocardial Ischemia and 33: Antihypertensive Agents and the Drug Therapy of Hypertension; Knapp et al., 1994; Marin and Davis, 1998). This topic is not specifically covered in this chapter.

The psychoses are among the most severe psychiatric disorders, in which there is not only marked impairment of behavior but also a serious inability to think coherently, to comprehend reality, or to gain insight into the presence of these abnormalities. These common disorders (affecting perhaps 0.5% to 1.0% of the population at some age) typically include symptoms of false beliefs (delusions) and abnormal sensations (hallucinations).
The psychotic disorders are suspected of having a neurobiological basis but usually are distinguished from the cognitive disorders. The etiological basis of the psychotic disorders remains unknown, although genetic and neurodevelopmental as well as environmental causative factors have been proposed. Representative syndromes in this category include schizophrenia, brief psychoses, and delusional disorders, although psychotic features also are not uncommon in the major mood disorders, particularly mania and severe melancholic depression. Psychotic illnesses are characterized by disorders of thinking processes, as inferred from illogical or highly idiosyncratic communications, with disorganized or irrational behavior and varying degrees of altered mood that can range from excited agitation to severe emotional withdrawal. Idiopathic psychoses characterized mainly by chronically disordered thinking and emotional withdrawal and often associated with delusions and auditory hallucinations are called schizophrenia. Acute or recurrent idiopathic psychoses also occur that bear an uncertain relationship to schizophrenia or the major affective disorders. In addition, more or less isolated delusions can arise in delusional disorder or paranoia. Antipsychotic drugs exert beneficial effects in many types of psychotic illness and are not selective for schizophrenia. Their beneficial actions are found in disorders ranging from postsurgical delirium and amphetamine intoxication to paranoia, mania, and psychotic depression, and they can be beneficial against the agitation of Alzheimer's dementia. They are especially beneficial in severe depression and possibly other conditions marked by severe turmoil or agitation. This class of agents is discussed in Chapter 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania. 
The major disorders of mood or affect include the syndromes of major depression (formerly including melancholia) and bipolar disorder (formerly manic-depressive disorder). These disorders are quite prevalent, affecting several percent of the population at some time. They commonly include disordered autonomic functioning (e.g., altered activity rhythms, sleep, and appetite) and behavior, as well as persistent abnormalities of mood. These disorders carry an increased risk of self-harm or suicide as well as increased mortality from stress-related general medical conditions, from medical complications of commonly comorbid abuse of alcohol or drugs, or from accidents. Bipolar
disorder is marked by a high likelihood of recurrences of severe depression and manic excitement, often with psychotic features. Major depression is usually treated with a variety of agents generally considered to be antidepressants, which have beneficial effects on the symptoms of major depression as well as on those of anxiety disorders. They are discussed further in this chapter. Bipolar disorder usually is treated primarily with lithium, certain anticonvulsants, or other agents with mood-stabilizing effects, as discussed in Chapter 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania.

The less pervasive psychiatric disorders include conditions formerly termed psychoneuroses, which currently are viewed as anxiety-associated disorders. Whereas the ability to comprehend reality is retained, suffering and disability sometimes are severe. Anxiety disorders may be acute and transient or, commonly, recurrent or persistent. Their symptoms may include mood changes (fear, panic, dysphoria) or limited abnormalities of thought (obsessions, irrational fears or phobias) or of behavior (avoidance, rituals or compulsions, pseudoneurological or "hysterical" conversion signs, or fixation on imagined or exaggerated physical symptoms). In such disorders, drugs may have some beneficial effects, particularly by modifying associated anxiety and depression and so facilitating a more comprehensive program of treatment and rehabilitation. Currently, antidepressants as well as sedative-antianxiety agents commonly are used to treat anxiety disorders, which are considered later in this chapter.

Other typically lifelong conditions, including the personality disorders, substance-use disorders, and hypochondriasis, may or may not respond appreciably to pharmacological intervention. Personality disorders have prominent avoidant, antisocial, paranoid, withdrawn, dependent, or unstable characteristics.
Other disorders involve patterns of behavior (e.g., abuse of alcohol or other substances, deviant eating, exaggerated somatic preoccupations, or other abnormal behaviors). Typically, psychotropic drugs alone are not effective in such long-term conditions except when anxiety or depression occur. Pharmacological treatment also is an important component of the medical management of withdrawal from addicting substances and in supporting their avoidance (see Chapter 24: Drug Addiction and Drug Abuse; Cornish et al., 1998).

Biological Hypotheses in Mental Illness

The introduction in the 1950s of relatively effective and selective drugs for the management of schizophrenic and manic-depressive patients encouraged formulation of biological concepts of the pathogenesis of these major mental illnesses. In addition, other agents were discovered that mimic some of the symptoms of severe mental illnesses. These include LSD, which induces hallucinations and altered emotional states; antihypertensive agents such as reserpine, which can induce depression; and stimulants that can induce manic or psychotic states when taken in excess. A leading hypothesis that arose from such considerations was based on observations indicating that antidepressants enhance the biological activity of monoamine neurotransmitters in the CNS and that antiadrenergic compounds may induce depression. These observations led to speculation that a deficiency of aminergic transmission in the CNS might be causative of depression or that an excess could result in mania. Further, since antipsychotic agents antagonize the actions of dopamine as a neurotransmitter in the forebrain, it was proposed that there may be a state of functional overactivity of dopamine in the limbic system or cerebral cortex in schizophrenia or mania. Alternatively, an endogenous psychotomimetic compound might be produced either uniquely or in excessive quantities in psychotic patients.
This "pharmacocentric" approach to the construction of hypotheses was appealing and gained
strong encouragement from studies of the actions of antipsychotic and antidepressant drugs while also encouraging further development of similar agents. In turn, the plausibility of such biological hypotheses has encouraged interest in genetic studies as well as in clinical biochemical studies. Despite extensive efforts, attempts to document metabolic changes in human subjects predicted by these hypotheses have not, on balance, provided consistent or compelling corroboration (Baldessarini, 2000; Bloom and Kupfer, 1995; Musselman et al., 1998). Moreover, results of genetic studies have provided evidence that inheritance can account for only a portion of the causation of mental illnesses, leaving room for environmental and psychological hypotheses.

The antipsychotic, antianxiety, antimanic, and antidepressant drugs have effects on cortical, limbic, hypothalamic, and brainstem mechanisms that are of fundamental importance in the regulation of arousal, consciousness, affect, and autonomic functions. It is quite possible that physiological and pharmacological modification of these brain regions has important behavioral consequences and useful clinical effects regardless of the fundamental nature or cause of the mental disorder in question. The lack of symptomatic or even syndromal specificity of most psychotropic drugs tends to reduce the chances of finding a discrete metabolic correlate for a specific disease conceived simply on the basis of the actions of therapeutic agents. Finally, the technical problems associated with attempts to study changes in the in vivo metabolism or the postmortem chemistry of the human brain are formidable. Among these are artifacts introduced by drug treatment itself. In summary, the available information does not permit a conclusion as to whether or not discrete biological lesions are the crucial basis of the most severe mental illnesses (other than the deliria and dementias).
Moreover, it is not necessary to presume that such a basis is operative to provide effective medical treatment for psychiatric patients. Furthermore, it would be clinical folly to underestimate the importance of psychological and social factors in the manifestations of mental illnesses or to overlook psychological aspects of the conduct of biological therapies (Baldessarini, 2000).

Identification and Evaluation of Psychotropic Drugs

Although rational, predictive development and assessment of the efficacy of any drug is problematic, the difficulties in evaluating psychoactive drugs are particularly challenging. The essential characteristics of human mental disorders cannot be reproduced in animals. Cognition, communication, and social relationships in animals are difficult to compare with human conditions. Thus, screening procedures in animals are of limited utility for the discovery of unique therapeutic agents. Contemporary pharmacology has provided many techniques for characterizing the actions of known psychotropic and other CNS agents at the cellular and molecular levels. Characteristics such as affinity for specific receptors or transporters can lead to the identification of new agents. Further innovation has been emerging slowly from the rapid recent progress in identifying novel subtypes of classical neurotransmitter receptors, effectors, and many other macromolecular target sites in brain tissue for potential new drugs (Baldessarini, 2000). In addition, clinical evaluation of new drugs is hampered by the lack of homogeneity within diagnostic groups and difficulty in application of valid, sensitive measurements of the effects of therapy. As a consequence, the results of clinical trials of psychotropic agents sometimes seem equivocal or inconsistent.

Treatment of Depressive and Anxiety Disorders

Major depression is characterized by clinically significant depression of mood and impairment of functioning as its primary clinical manifestations.
Its clinical manifestations and current treatment overlap with those of the anxiety disorders, including panic-agoraphobia syndrome, severe phobias, generalized anxiety disorder, social anxiety disorder, posttraumatic stress disorder, and obsessive-compulsive
disorder. Extremes of mood may be associated with psychosis, manifested as disordered or delusional thinking and perceptions, often congruent with the predominant mood. Conversely, psychotic disorders may have associated or secondary changes in mood. This overlap of disorders may lead to errors in diagnosis and clinical management (American Psychiatric Association, 2000). Each with a lifetime morbid risk of perhaps 10% in the general population, major mood and anxiety disorders are the most common mental illnesses (Kessler et al., 1994). Clinical depression is distinguished from normal grief, sadness, disappointment, and the dysphoria or demoralization often associated with medical illness. The condition is underdiagnosed and frequently undertreated (McCombs et al., 1990; Suominen et al., 1998). Major depression is characterized by feelings of intense sadness and despair, mental slowing and loss of concentration, pessimistic worry, lack of pleasure, self-deprecation, and variable agitation. Physical changes also occur, particularly in severe, vital, or "melancholic" depression. These include insomnia or hypersomnia; altered eating patterns, with anorexia and weight loss or sometimes overeating; decreased energy and libido; and disruption of the normal circadian and ultradian rhythms of activity, body temperature, and many endocrine functions. As many as 10% to 15% of individuals with this disorder, and up to 25% of those with bipolar disorder, display suicidal behavior during their lifetime (Baldessarini and Jamison, 1999). Depressed patients usually respond to antidepressant drugs or, in severe or treatment-resistant cases, to ECT (see Rudorfer et al., 1997). The decision to treat with an antidepressant is guided by the presenting clinical syndrome and its severity and by the patient's personal and family history.
Most antidepressants exert important actions on the metabolism of monoamine neurotransmitters and their receptors, particularly norepinephrine and serotonin (Buckley and Waddington, 2000; Owens et al., 1997). Their therapeutic effectiveness and actions, together with strong evidence for genetic predisposition, have led to speculation that the biological basis of major mood disorders may include abnormal function of monoamine neurotransmission. However, the direct evidence for this view is limited and inconsistent (see Baldessarini, 2000; Bloom and Kupfer, 1995; Heninger and Charney, 1987; Musselman et al., 1998). Diagnosis and treatment of the severe anxiety disorders have advanced recently, stimulated by the discovery that selective serotonin-reuptake inhibitors, which are effective antidepressants, also are powerful antianxiety agents. Disorders including panic-agoraphobia, social and other phobias, generalized anxiety, and obsessive-compulsive disorder, as well as apparently related disorders of impulse control, all appear to be responsive to treatment with serotonin-reuptake inhibitors (Taylor, 1998). Benzodiazepines, azapirones, and other sedative-anxiolytic drugs also are employed in anxiety disorders. Their pharmacology is discussed in Chapter 17: Hypnotics and Sedatives. Mania and the alternation or admixture of mania and depression (bipolar disorder) are less common than nonbipolar major depression. Mania and its milder form (hypomania) are treated with antipsychotic drugs, anticonvulsants, or lithium salts, sometimes supplemented with a potent sedative in the short term, and with lithium salts or certain anticonvulsants with mood-stabilizing properties (see Chapters 17: Hypnotics and Sedatives and 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania) for longer-term prevention of recurrences.
Mania is characterized by excessive elation, typically tinged with dysphoria or marked by irritability, severe insomnia, hyperactivity, uncontrollable speech and activity, impaired judgment, and risky behaviors, and sometimes by psychotic features. The selection and administration of appropriate treatment for depression and anxiety disorders are discussed below. Antidepressants Imipramine, amitriptyline, their N-demethyl derivatives, and other similar compounds were the first successful antidepressants and, since the early 1960s, have been widely used for the treatment of major depression. Because of their structures (see Table 19-1), these agents often are referred to as the "tricyclic" antidepressants (Frazer, 1997). Their efficacy in alleviating major depression is well established, and they also have proved useful in a number of other psychiatric disorders. Just prior to the discovery of the antidepressant properties of imipramine in the late 1950s, the ability of monoamine oxidase (MAO) inhibitors to cause mania was noted, and during the early 1960s, both types of agents were studied intensively in the treatment of clinical depression. Early MAO inhibitors appeared to be limited in efficacy at the doses used and presented both toxic risks and potentially dangerous interactions with other agents, thus limiting their acceptance in favor of the tricyclic agents. After decades of limited progress, a series of innovative antidepressants has emerged. Most, like citalopram, fluoxetine, fluvoxamine, paroxetine, sertraline, and venlafaxine, are inhibitors of the active reuptake (transport) of serotonin (5-hydroxytryptamine, 5-HT) into nerve terminals (see Chapter 11: 5-Hydroxytryptamine (Serotonin): Receptor Agonists and Antagonists). Others, including bupropion, nefazodone, and mirtazapine, have a less well defined neuropharmacology and can be considered "atypical." Whereas the efficacy of the newer agents is not superior to that of the older agents, their relative safety and tolerability have led to their rapid acceptance as the most commonly prescribed antidepressants. History Monoamine Oxidase Inhibitors In 1951, isoniazid and its isopropyl derivative, iproniazid, were developed for the treatment of tuberculosis. Iproniazid had mood-elevating effects in tuberculosis patients. In 1952, Zeller and coworkers found that iproniazid, in contrast to isoniazid, was capable of inhibiting the enzyme MAO. Following investigations by Kline and by Crane in the mid-1950s, iproniazid was used for the treatment of depressed patients; historically, it is the first clinically successful modern antidepressant (Healy, 1997).
Tricyclic Antidepressants Häfliger and Schindler in the late 1940s synthesized a series of more than 40 iminodibenzyl derivatives for possible use as antihistamines, sedatives, analgesics, and antiparkinsonism drugs. One of these was imipramine, a dibenzazepine compound, which differs from the phenothiazines only by replacement of the sulfur with an ethylene bridge to produce a seven-membered central ring analogous to the benzazepine antipsychotic agents (see Chapter 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania). Following screening in animals, a few compounds, including imipramine, were selected on the basis of sedative or hypnotic properties for therapeutic trial. During clinical investigation of these putative phenothiazine analogs, Kuhn (1958) fortuitously found that, unlike the phenothiazines, imipramine was relatively ineffective in quieting agitated psychotic patients, but it had a remarkable effect on depressed patients; indisputable evidence of its effectiveness in these patients has accumulated (see Baldessarini, 1989; Hollister, 1978; Potter et al., 1998; Thase and Nolen, 2000). Older tricyclic antidepressants with a tertiary-amine side chain (including amitriptyline, doxepin, and imipramine) block neuronal uptake of both serotonin and norepinephrine, and clomipramine is relatively selective against serotonin (see Table 19-2). Following this lead, even more selective serotonin-reuptake inhibitors were developed in the early 1970s, arising from observations by Carlsson that antihistamines including chlorpheniramine and diphenhydramine inhibited the transport of serotonin or norepinephrine. Chemical modifications led to the earliest selective serotonin-reuptake inhibitor, zimelidine, soon followed by development of fluoxetine and
fluvoxamine (Carlsson and Wong, 1997; Fuller, 1992; Masand and Gupta, 1999; Tollefson and Rosenbaum, 1998; Wong and Bymaster, 1995). Zimelidine was first in clinical use, but was withdrawn due to association with febrile illnesses and cases of Guillain-Barré ascending paralysis, leaving fluoxetine and fluvoxamine as the first widely used selective serotonin-reuptake inhibitors (dubbed SSRIs). The development of these agents was paralleled by identification of compounds with selectivity for norepinephrine reuptake and others effective against both serotonin and norepinephrine reuptake (see "Prospectus," below). Chemistry and Structure-Activity Relationships Tricyclic Antidepressants The search for compounds related chemically to imipramine yielded multiple analogs. In addition to the dibenzazepines, imipramine and its secondary-amine congener (and major metabolite) desipramine, as well as its 3-chloro derivative clomipramine, there are amitriptyline and its N-demethylated metabolite nortriptyline (dibenzocycloheptadienes), as well as doxepin (a dibenzoxepine) and protriptyline (a dibenzocycloheptatriene). Other structurally related agents are trimipramine (a dibenzazepine, with only weak effects on amine transport); maprotiline (containing an additional ethylene bridge across the central six-carbon ring); and amoxapine (a piperazinyldibenzoxazepine with mixed antidepressant and neuroleptic properties). Since these agents all have a three-ring molecular core and most share pharmacological (norepinephrine-reuptake inhibition) and clinical (antidepressant, anxiolytic) properties, the trivial name "tricyclic antidepressants" can be used for this group. Structures and other features of antidepressant compounds are given in Table 19-1. Selective Serotonin-Reuptake Inhibitors Most are aryl or aryloxyalkylamines. Several (including citalopram, fluoxetine, and zimelidine) are racemates; sertraline and paroxetine are separate enantiomers.
The (S)-enantiomers of citalopram and of fluoxetine and its major metabolite norfluoxetine are highly active against serotonin transport and also may have antimigraine effects not found with the (R)-enantiomer of fluoxetine. The (R)-enantiomer of fluoxetine also is active against serotonin transport and is shorter-acting than the (S)-enantiomer. (R)-Norfluoxetine is virtually inactive (Wong et al., 1993). Structure-activity relationships are not well established for serotonin-reuptake inhibitors. However, it is known that the para-location of the CF3 substituent of fluoxetine (see Table 19-1) is critical for serotonin-transporter potency. Its removal and substitution at the ortho-position of a methoxy group yields nisoxetine, a highly selective norepinephrine-uptake inhibitor. Monoamine Oxidase Inhibitors The first MAO inhibitors to be used in the treatment of depression were derivatives of hydrazine, a highly hepatotoxic substance. Phenelzine is the hydrazine analog of phenethylamine, a substrate for MAO; isocarboxazid is a hydrazide derivative that probably is converted to the corresponding hydrazine to produce long-lasting inhibition of MAO. Subsequently, compounds unrelated to hydrazine were found to be potent MAO inhibitors. Several of these agents were structurally related to amphetamine and were synthesized in an attempt to enhance central stimulant properties. Cyclization of the side chain of amphetamine resulted in tranylcypromine, which also produces long-acting inhibition of MAO without covalent bonding. Selegiline and several experimental MAO inhibitors are propargylamines containing a reactive acetylenic bond that interacts irreversibly with the flavin cofactor of MAO (Cesura and Pletscher, 1992). Short-acting, reversible MAO inhibitors include brofaromine (a piperidylbenzofuran), moclobemide (a morpholinobenzamide), and
toloxatone (an oxazolidinone). Moclobemide has at least moderate antidepressant activity (Lotufo-Neto et al., 1999). Pharmacological Properties: Central Nervous System Tricyclic Antidepressants and Other Norepinephrine-Reuptake Inhibitors Knowledge of the pharmacological properties of antidepressant drugs remains incomplete, and its coherent interpretation is limited by a lack of a compelling psychobiological theory of mood disorders. The actions of imipramine-like tricyclic antidepressants include a range of complex, secondary adaptations to their initial actions as inhibitors of neuronal transport (reuptake) of norepinephrine and variable blockade of serotonin transport (see Table 19-2; Barker and Blakely, 1995; Beasley et al., 1992; Heninger and Charney, 1987; Leonard and Richelson, 2000; Potter et al., 1998; Wamsley et al., 1987). Tricyclic-type antidepressants with secondary-amine side chains or the N-demethylated (nor) metabolites of agents with tertiary-amine moieties (e.g., amoxapine, desipramine, maprotiline, norclomipramine, nordoxepin, nortriptyline) are relatively selective inhibitors of norepinephrine transport. Most tertiary-amine tricyclic antidepressants also inhibit the uptake-inactivation of serotonin. The amine transport-inhibiting effects of antidepressants occur immediately and are sustained indefinitely. It is likely that selective inhibitors of norepinephrine reuptake, including reboxetine, share many of the actions of older norepinephrine-transport inhibitors like desipramine (Delgado and Michaels, 1999). Among the tricyclic antidepressants, trimipramine is exceptional in that it lacks prominent inhibitory effects at monoamine transport (see Table 19-2), and its actions remain unexplained.
The tricyclic and other norepinephrine-active antidepressants do not block dopamine transport (see Table 19-2) and in that way differ from CNS stimulants, including cocaine, methylphenidate, and the amphetamines (see Chapter 10: Catecholamines, Sympathomimetic Drugs, and Adrenergic Receptor Antagonists). Nevertheless, they may have indirect dopamine-facilitating effects through the increased perisynaptic abundance of norepinephrine, particularly in cerebral cortex, where adrenergic terminals exceed those releasing dopamine. Tricyclic antidepressants also can desensitize D2 dopamine autoreceptors, perhaps indirectly enhancing forebrain dopaminergic mechanisms, and so contribute to elevation of mood and behavioral activity (Potter et al., 1998). In addition to their transport-inhibiting effects, tricyclic antidepressants have variable interactions with adrenergic receptors (see Table 19-3). The presence or absence of such receptor interactions appears to be critical for subsequent responses to increased availability of extracellular norepinephrine in or near synapses. Most tricyclic antidepressants have at least moderate and selective affinity for α1-adrenergic receptors, much less for α2, and virtually none for β receptors. The α2 receptors include presynaptic autoreceptors that limit the neurophysiological activity of noradrenergic neurons ascending from the locus ceruleus in brainstem to supply mid- and forebrain projections, as well as descending projections to the spinal cord and cholinergic preganglionic efferents to the peripheral autonomic ganglia (see Chapters 6: Neurotransmission: The Autonomic and Somatic Motor Nervous Systems and 10: Catecholamines, Sympathomimetic Drugs, and Adrenergic Receptor Antagonists). Autoreceptor mechanisms also reduce the synthesis of norepinephrine through the rate-limiting step at tyrosine hydroxylase, presumably through α2-adrenergic receptor attenuation of cyclic AMP-mediated phosphorylation.
Activation of these receptors inhibits transmitter release through molecular and cellular actions that are incompletely defined but likely include suppression of voltage-gated Ca2+ currents and activation of G protein-coupled, receptor-operated K+ currents (Foote and Aston-Jones, 1995).
The α2-receptor-mediated negative-feedback mechanisms are rapidly activated on administration of tricyclic antidepressants. By limiting synaptic availability of norepinephrine, tricyclic antidepressants tend to maintain functional homeostasis. However, with repeated drug exposure, α2-receptor responses are eventually diminished. This loss may result from desensitization secondary to increased exposure to the endogenous agonist ligand norepinephrine or, alternatively, from prolonged occupation of the norepinephrine transporter itself via an allosteric effect, as suggested for inhibitors of serotonin transporters on serotonergic neurons (Chaput et al., 1991). Over a period of days to weeks, this adaptation allows the presynaptic production and release of norepinephrine to return to, or even exceed, baseline levels (Baldessarini, 1989; Heninger and Charney, 1987; Foote and Aston-Jones, 1995; Potter et al., 1998). However, long-term treatment eventually can reduce the expression of tyrosine hydroxylase (Nestler et al., 1990). Postsynaptic β-adrenergic receptors also gradually down-regulate in functional receptor density over several weeks. This adaptive response accompanies repeated treatment with various types of antidepressants, including tricyclics, some serotonin-reuptake inhibitors, MAO inhibitors, and electroshock treatment in animals (Sulser and Mobley, 1980). Combinations of a serotonin-transport inhibitor with a tricyclic antidepressant may have a more rapid β-adrenergic receptor-desensitizing effect. The pharmacodynamic or pharmacokinetic basis of this interaction is not clear, nor are its contributions to superior clinical efficacy proven (Nelson et al., 1991). It is unlikely that loss of β-receptor functioning contributes directly to the mood-elevating effects of antidepressant treatment, since β blockers tend to induce or worsen depression in vulnerable persons.
Nevertheless, loss of inhibitory α2-adrenergic influences on serotonergic neurons may enhance release of serotonin and thus contribute indirectly to antidepressant effects (Leonard and Richelson, 2000; Wamsley et al., 1987; see Chapter 10: Catecholamines, Sympathomimetic Drugs, and Adrenergic Receptor Antagonists). Postsynaptic α1-adrenergic receptors may be partially blocked initially, probably contributing to early hypotensive effects of many tricyclic antidepressants. Over weeks of treatment they remain available and may even become more sensitive to norepinephrine, as mood-elevating effects gradually emerge clinically. At the time of clinical efficacy, therefore, inactivation of transmitter by reuptake continues to be blocked; presynaptic production and release of norepinephrine has returned to or may exceed baseline levels; and a postsynaptic α1-adrenergic mechanism is in place to provide a functional output believed to contribute to antidepressant activity (Baldessarini, 1989). Additional neuropharmacological changes that may contribute to the clinical effects of tricyclic antidepressants include indirect facilitation of serotonin and perhaps dopamine neurotransmission through excitatory α1 receptors on other monoaminergic neurons, or desensitized, inhibitory α2 receptors, as well as D2 dopamine autoreceptors. Activated release of serotonin and dopamine may, in turn, lead to secondary down-regulation of serotonin 5-HT1 autoreceptors, postsynaptic 5-HT2 receptors, and perhaps dopamine D2 autoreceptors and postsynaptic D2 receptors (Leonard and Richelson, 2000). Other adaptive changes have been observed in response to long-term treatment with tricyclic antidepressants. These include altered sensitivity of muscarinic acetylcholine receptors as well as decreases of GABAB (γ-aminobutyric acid-B) receptors and possibly also NMDA glutamate receptors (Kitamura et al., 1991; Leonard and Richelson, 2000).
In addition, there is a net gain in cyclic AMP production and altered activity of protein kinases in some cells, including those acting on cytoskeletal and other structural proteins that may alter neuronal growth and sprouting (Racagni et al., 1991; Wong et al., 1991). Nuclear genetic-regulatory factors also are affected, including the cyclic AMP-response-element binding protein (CREB) and brain-derived neurotrophic factor (BDNF) (Duman et al., 1997; Siuciak et al., 1997). Additional changes may be indirect effects of
antidepressant treatment or may reflect recovery from depressive illness. These include normalization of corticosteroid release and the sensitivity of corticosteroid receptors, as well as shifts in the production of prostaglandins and cytokines and in lymphocyte functions (Kitayama et al., 1988; Leonard and Richelson, 2000). Understanding of the physiological and psychobiological implications of these many molecular and cellular changes during repeated antidepressant treatment remains incomplete. Nevertheless, their occurrence underscores the important concept that repeated administration of neuroactive or psychotropic agents sets off a complex series of adaptive processes. Specifically regarding the tricyclic antidepressants, their neuropharmacology is not accounted for simply by blocking the transport-mediated removal of norepinephrine, even though this effect is no doubt a crucial initiating event leading to a cascade of important secondary adaptations (Duman et al., 1997; Hyman and Nestler, 1996; Leonard and Richelson, 2000). Interactions of antidepressants with monoaminergic synaptic transmission are illustrated in Figure 19-1.
Figure 19-1. Sites of Action of Antidepressants. A. In varicosities ("terminals") along terminal arborizations of norepinephrine (NE) neurons projecting from brainstem to forebrain, tyrosine is oxidized to dihydroxyphenylalanine (DOPA)
Prospectus Major affective and anxiety disorders continue to represent the most common psychiatric illnesses; they include the most prevalent disorders of unknown cause with psychotic features and represent enormous costs to society in morbidity, disability, and premature mortality (Kessler et al., 1994). Rates of diagnosis and appropriate treatment of major mood disorders have improved somewhat in recent years with the advent of better accepted modern mood-altering medicines. Nevertheless, the majority of patients with depression and bipolar disorder are diagnosed after years of delay, if at all, and many remain inadequately treated (McCombs et al., 1990; Newman and Hassan, 1999). Given these unmet needs, the clinical and economic incentives for developing additional, improved mood-altering medicines are clear. Several groups of depressed patients continue to be particularly inadequately treated or studied. They include children and the elderly, those with bipolar depression, and those with severe, chronic, or psychotic forms of depression (see Shulman et al., 1996; Kutcher, 1997). Whereas ambulatory depressed patients are much greater in number, have the highest likelihood of improvement and recovery, and represent the largest potential market, they also are most likely to respond to a placebo or other nonspecific treatment and thus represent a special challenge for the development of clinically useful and cost-effective treatments. A major limitation of efforts to develop new mood-altering agents is the lack of compelling rationales other than imitation or modification of successful precedents. The fundamental problem is the continued lack of a coherent pathophysiology, let alone an etiology, of major depression, bipolar disorder, and the common anxiety disorders despite decades of important and useful contributions to the description of the syndromes.
Major depression may well represent a spectrum of disorders, varying in severity from relatively mild and self-limited disorders that approach everyday human distress to extraordinarily severe, psychotic, incapacitating, and deadly illnesses. It remains difficult to conceive of a mood-altering agent that does not affect central monoaminergic synaptic neurotransmission, particularly that mediated by either norepinephrine or serotonin, which limits the identification of novel therapeutic targets for these disorders (Murphy et al., 1995; Bloom and Kupfer, 1995; Healy, 1997). Novel Treatments for Major Depression and Anxiety Disorders Many of the large number of potential antidepressants in development continue to exploit interactions with either noradrenergic or serotonergic systems in proven ways (Evrard and Harrison, 1999; Kent, 2000). These agents conceptually are remarkably similar to the tricyclic-type antidepressants and include several relatively selective inhibitors of the neuronal transport of norepinephrine (e.g., oxaprotiline, levoprotiline, lofepramine, nisoxetine, reboxetine, (R)-thionisoxetine, tomoxetine, and viloxazine) that are less cardiotoxic or lethal on overdose than traditional tricyclic antidepressants. Development of additional serotonin-reuptake inhibitors has slowed, although their range of approved indications continues to expand, particularly into the anxiety, impulsive or compulsive, and eating disorders. However, several novel serotonin-reuptake inhibitors, including Ro-15-8081 and aryl- or naphthylpiperidines and thiodiphenyls, have been developed; tianeptine has complex effects on the storage and release of serotonin and may facilitate its uptake in vivo. Also, some substituted phenyltropane analogs of cocaine are serotonin transporter-selective and much less potent at dopamine transporters than are cocaine and older phenyltropanes (Robertson, 1999). Some
of these agents have been evaluated as potential antidepressants in clinical trials (see Leonard, 1994; Murphy et al., 1995). The clinical efficacy of the mixed serotonin/norepinephrine transport antagonist, venlafaxine, and the interesting beneficial properties of an older, similar agent, clomipramine (see above), have encouraged further exploration of the principle of mixed aminergic potentiation (Kent, 2000). This strategy has led to such drugs as duloxetine (LY-248686), milnacipran, and analogs of bupropion. These developments arose conceptually from pursuing the transport-blocking activities of the original tricyclic antidepressants, with efforts to avoid their familiar toxic properties (including potent antimuscarinic and cardiac depressant activities). Surprisingly underexplored are agents with dopamine-potentiating activity (Murphy et al., 1995). Nomifensine is a mixed antagonist of norepinephrine and dopamine transporters (see Table 19-2; Zahniser et al., 1999). An effective antidepressant, it was withdrawn from clinical use due to association with acute hemolytic anemia. The stimulant-antidepressant bupropion also has mixed antagonistic effects on the same amine transporters. Other dopamine-receptor agonists also may have antidepressant effects (Murphy et al., 1995; Mattox et al., 1986; Wells and Marken, 1989; D'Acquila et al., 1994). Additional agents whose actions include inhibition of dopamine transport are the tricyclic-like agent amineptine, medifoxamine, and a series of potent piperazine derivatives (GBR-12909 and others). Antiparkinsonism, directly acting dopamine-receptor agonists, including bromocriptine, pramipexole, lisuride, and roxindole (the latter two also serotonergic), also have been reported to have mood-elevating properties (Murphy et al., 1995; see Chapter 22: Treatment of Central Nervous System Degenerative Disorders).
Novel approaches to enhancing central adrenergic function include the use of α2-adrenergic receptor antagonists. This is one of several activities of the complex atypical antidepressants mianserin and mirtazapine. The α2-receptor antagonist yohimbine is stimulant-like, whereas other selective α2 antagonists including idazoxan, fluparoxan, R-47,243, and setiptiline have questionable or untested antidepressant activity. Centrally acting β-adrenergic receptor agonists including clenbuterol, albuterol, and SR-58611A have not been clinically useful antidepressants, and direct-acting α1-adrenergic receptor agonists (including adrafinil and modafinil) have had inconsistent effects on depression and may even have deleterious cognitive actions (see Leonard, 1994; Murphy et al., 1995; Arnsten et al., 1999). Phosphodiesterase inhibitors including rolipram have been considered as potential antidepressants but have dubious clinical effects and an uncertain neuropharmacology aside from the ability to prevent hydrolysis of cyclic AMP (Murphy et al., 1995). Interest in MAO inhibitors has continued despite their risks of dangerous interactions with other substances. Selective irreversible propargyl inhibitors of MAO-A (e.g., clorgyline) with mood-elevating activity, as well as inhibitors of MAO-B (e.g., selegiline) with dopamine sparing, antiparkinsonism, and antidepressant activities (at least at high doses, probably not selective for MAO-B), represent potentially important leads to novel psychotropic or neurotropic agents. In addition, several short-acting reversible inhibitors of MAO-A have at least moderate antidepressant activity and limit the risk of inducing acute hypertension by potentiating pressor amines. Such short-acting MAO-A inhibitors include brofaromine, moclobemide (MANERIX), pirlindole, and toloxatone (see Danish University Antidepressant Group, 1993; Leonard, 1994; Murphy et al., 1995). At least one short-acting inhibitor of MAO-B (Ro-19-6327) also has been described.
Minaprine is an experimental antidepressant that appears to enhance dopamine and serotonin neurotransmission, possibly through weak anti-MAO-A actions (Murphy et al., 1995). Another approach has been to develop CNS-selective MAO inhibitors in order to avoid blocking hepatic MAO or potentiating peripheral sympathetic function. A lead compound for a CNS-selective MAO
inhibitor is MDL-72394, a prodrug that evidently is converted by cerebral decarboxylation into an irreversible, central MAO inhibitor (Oxenkrug et al., 1999). The large number of serotonin receptor subtypes provides many opportunities for developing novel agonists, partial agonists, antagonists, and negative antagonists (or inverse agonists), some of which may alter mood or treat anxiety disorders. Pindolol, a mixed β-adrenergic and serotonin 5-HT1A somatodendritic-autoreceptor antagonist, has been reported to accelerate or potentiate the action of some serotonin-reuptake inhibitor antidepressants. These observations have stimulated efforts to develop agents with mixed antiserotonin-transport and anti-5-HT1A-receptor activity in the same molecule. Several such agents are known, including derivatives of pindolol and the naphthylpyrrolidine EMD-95750. An opposite strategy is to evaluate 5-HT1A-receptor agonists as possible mood-altering or antianxiety agents. Such agents may act in part by enhancing release of norepinephrine (Cohen et al., 1999). An example of a 5-HT1A partial agonist with anxiolytic effects is flesinoxan (Albert et al., 1999). Several partial agonists of 5-HT1A receptors have been explored for potential utility both in anxiety disorders and in milder cases of mixed anxiety-depression (Dubovsky and Buzan, 1995; Murphy et al., 1995). Some 5-HT1A-receptor partial agonists that may have antidepressant activity (e.g., gepirone, ipsapirone, and zalospirone) are azapirones related chemically to buspirone. An additional opportunity for modifying serotonin function is to antagonize the 5-HT1B/1D receptor subtypes that serve as autoreceptors; several such antagonists are known, including GR-127935 (Robertson, 1999). Some recently developed antidepressants, notably nefazodone, combine activity as serotonin-reuptake inhibitors and 5-HT2A antagonists. Several innovative agents, including YM-35992, have followed this precedent.
The 5-HT2C serotonin receptor is prominent in limbic forebrain and cerebral cortex. This receptor subtype has been postulated to be a reasonable therapeutic target for depression or anxiety (Murphy et al., 1995). Nefazodone and the trazodone metabolite m-chlorophenylpiperazine (mCPP), as well as Ro-60-0175, Ro-60-0332, Org-12962, and Org-8484, all have 5-HT2C agonist or partial-agonist properties. Norfluoxetine also interacts potently with 5-HT2C receptors. There also are selective ligands for the 5-HT6 receptor (Ro-63-0563, SB-171046) and emerging compounds for 5-HT7 receptors (Robertson, 1999). However, the potential psychotropic properties of such agents remain obscure, requiring substantial investment in preclinical and exploratory investigations to evaluate their clinical utility in treatment of depression or anxiety. Agents acting at amino-acid neurotransmission systems also provide leads to potential psychotropic drugs. For example, certain analogs of progesterone interact with a distinct allosteric regulatory site in the GABAA-receptor complex to activate hyperpolarizing chloride channels, and so may have anxiolytic properties (Robertson, 1999; Nabeshima and Muraoka, 1999). Agents that interact with N-methyl-D-aspartate (NMDA) glutamate receptors have antidepressant-like activity in some animal behavioral models. They include the NMDA antagonists dizocilpine (MK-801) and AP-7 (Murphy et al., 1995). Receptors for cerebral peptides also provide targets for psychotropic drug development. Opioids may have mood-elevating effects, but exploration of drugs acting at opioid receptors as potential antidepressants has been limited (Tejedor-Real et al., 1995). Cerebral sigma receptors (σ1, σ2) were identified initially as opioid receptors; their role remains obscure, but they may regulate release of norepinephrine, mediating the action of at least one agent, igmesine (JO-1784). Such agents might

lead to novel antidepressants (Maurice et al., 1996). Neurokinin-1 (NK1, substance P) antagonists also may have antidepressant effects (Swain and Rupniak, 1999). A series of more potent successors of the lead agent MK-869 are under development (Nutt, 1998; Saria, 1999). Some neuroactive steroids that may have antidepressant or anxiolytic activity include agents that appear to act at NK1 receptors (Maurice et al., 1999). Another neuropeptide-based strategy for developing antidepressant or antianxiety drugs derives from the pronounced behavioral effects of intracerebral administration of the corticotropin (ACTH)-releasing peptide, corticotropin-releasing factor (CRF). Observed responses include suggestions of fear or anxiety, increased startle response, loss of interest in food or sex, altered sleep, and eventually epileptic seizures. Small-molecule antagonists of receptors for CRF and related peptides can penetrate the blood-brain barrier and reverse these effects. A growing list of CRF1 receptor-selective antagonists includes antalarmin, CP-154,526, SP-904, NBI-30545, DNP-606, DNP-695, CRA-1000, and SC241. Some of these agents also interact with CRF2 receptors, but no highly selective CRF2 antagonists have been identified. The precise cerebral localization and function of the two CRF receptors should yield to exploration with site-selective ligands. Some CRF1 antagonists are in clinical testing (Mansbach et al., 1997; McCarthy et al., 1999; Steckler and Holsboer, 1999). There also is a search for natural products for the treatment of depression and anxiety disorders (Wong et al., 1998). Hypericum or St. John's wort extracts have shown at least moderate antidepressant activity in some controlled trials (Philipp et al., 1999) but not in others (Shelton et al., 2001). At least 10 active agents are found in hypericum; among these, hypericin and hyperforin have some activity as inhibitors of amine transport in vitro (Neary and Bu, 1999). 
An active constituent of psychoactive South African Sceletium plants, mesembrine, also may have clinically useful properties (Smith et al., 1996). Another natural product is the autacoid metabolic product of L-methionine and ATP, S-adenosyl-L-methionine, or active methionine, a ubiquitous methyl donor. It, too, has shown mood-elevating effects in human subjects and is sold as a "nutriceutical" product (Baldessarini, 1987). Finally, there may be some evidence of beneficial effects of Ginkgo biloba extract in mild dementia, but the extract probably is ineffective in depressive illness (Lingaerde et al., 1999; Wong et al., 1998). Some of these agents can produce adverse interactions with other drugs and should not be considered innocuous (Fugh-Berman, 2000). New Treatments for Anxiety Disorders Innovative prospects for the treatment of anxiety disorders include extensions of the pharmacology of benzodiazepines (see Chapter 17: Hypnotics and Sedatives). Advances in a molecular understanding of the GABAA receptor-benzodiazepine receptor-chloride channel complex indicate that this ring-shaped collection of transmembrane proteins includes representatives of at least 16 subunit proteins in five groups (α, β, γ, δ, ε; see Chapter 17: Hypnotics and Sedatives); benzodiazepines are believed to bind at the interface of α and γ subunits and GABA at the interface of α and β subunits. Various combinations of the subunits occur in different cell populations. This complexity may provide leads to receptor subtype-selective or even regionally selective agents with improved pharmacological properties. Ligands for specific benzodiazepine-receptor types include some nonbenzodiazepines. One, alpidem, an imidazopyridine, has useful anxiolytic activity in human beings, but hepatic toxicity prompted its discontinuation. 
Alternatively, some benzodiazepine derivatives have been found to have central anticholecystokinin activity; cholecystokinin has been implicated as a biological substrate for anxiety, and its antagonists have been proposed as potential antianxiety agents (Browne and Shaw, 1991). A particularly encouraging approach is the development and clinical testing of benzodiazepine-receptor ligands with agonist activity intermediate between a full agonist, such as diazepam, and an

antagonist, such as flumazenil (see Chapter 17: Hypnotics and Sedatives; Browne and Shaw, 1991; Potokar and Nutt, 1994). Benzodiazepines and β-carbolines can have various agonist, partial-agonist, inverse-agonist (reduce GABA effects on chloride influx), and antagonist (block full, partial, and inverse agonists) actions. Some with partial-agonist activity appear to have useful antianxiety effects with low risks of excessive sedation, cognitive impairment, tolerance, and dependence. Alpidem is a partial agonist; other examples of benzodiazepine partial agonists include the imidazole benzodiazepines bretazenil and imidazenil. Bretazenil reportedly shows antipanic activity even when taken intermittently, with low abuse potential or risk of dependence. Other partial agonists that are not benzodiazepine derivatives include the β-carboline abecarnil and the heterocyclic pazinaclone. Abecarnil also is selective for particular benzodiazepine-receptor subtypes. Elucidation of a growing number of serotonin-receptor subtypes and of agents that interact with them has strongly encouraged development of additional psychotropic agents acting on the serotonin system. One approach includes further development of azapirone analogs as 5-HT1A-receptor ligands. Another is the use of 5-HT3-receptor antagonists; some of these modulate dopamine synthesis and release, and others have shown properties in animal tests that suggest antianxiety activity. Agents with selective anti-5-HT3 activity include the short-term antiemetic compound ondansetron and the benzamide zacopride; many others are known but have been subjected to only limited clinical testing in psychiatric disorders, including psychosis and anxiety. 
Other approaches to the pharmacotherapy of anxiety disorders have included the use of antiadrenergic compounds usually employed for hypertension or other cardiovascular indications, including the β-adrenergic receptor antagonists propranolol and atenolol and the α2-receptor agonist clonidine (Cooper et al., 1990; see Chapters 10: Catecholamines, Sympathomimetic Drugs, and Adrenergic Receptor Antagonists and 33: Antihypertensive Agents and the Drug Therapy of Hypertension). Such compounds have not proven to be effective in severe anxiety disorders, but they may modify the autonomic expression of situational phobias such as performance anxiety (Rosenbaum and Pollock, 1994). A new technical aspect of the study of antianxiety agents has been the introduction of various laboratory procedures that can induce panic-like symptoms in a controlled setting as a basis for testing new antipanic treatments (Sullivan et al., 1999). It is reasonable to anticipate that the expansion of novel macromolecular target sites for CNS-active drug development may lead to innovative principles and agents for treating depressive and anxiety disorders in the future. For further discussion of psychiatric disorders, see Chapter 371 in Harrison's Principles of Internal Medicine, 16th ed., McGraw-Hill, New York, 2005.

Chapter 20. Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania
Overview Clinically effective antipsychotic agents include tricyclic phenothiazines, thioxanthenes, and dibenzepines, as well as butyrophenones and congeners, other heterocyclics, and experimental benzamides. Virtually all of these drugs block D2-dopamine receptors and reduce dopamine

neurotransmission in forebrain; some also interact with D1- and D4-dopaminergic, 5-HT2A- and 5-HT2C-serotonergic, and α-adrenergic receptors. Antipsychotic drugs are relatively lipophilic, metabolized mainly by hepatic oxidative mechanisms, and some have complex elimination kinetics. These drugs offer effective palliative treatment of both organic and idiopathic psychotic disorders with acceptable safety and practicality. Antipsychotic agents of high potency tend to have more adverse extrapyramidal neurological effects, whereas low-potency agents induce more sedative, hypotensive, and autonomic side effects. Characteristic neurological side effects of typical or "neuroleptic" antipsychotic agents include dystonia, akathisia, bradykinesia, and acute or late dyskinesias. Several antipsychotic agents, including clozapine, olanzapine, quetiapine, and low doses of risperidone, have limited extrapyramidal side effects and so are considered "atypical." Treatment of acute psychotic illness typically involves daily doses up to the equivalent of 10 to 20 mg of fluphenazine or haloperidol (at serum concentrations of about 5 to 20 ng/ml) or 300 to 600 mg of chlorpromazine; higher doses usually are not more effective but increase the risks of adverse effects. Long-term maintenance treatment usually requires lower doses, and tolerance is virtually unknown. For several decades, the treatment of mania and of recurrences of mania and depression in bipolar disorder was based mainly on the use of lithium carbonate or citrate. The therapeutic index of lithium is low, and close control of serum concentrations is required for its safe clinical application. Antipsychotic agents commonly are used to control acute or psychotic mania, and potent sedative-anticonvulsant benzodiazepines (see Chapter 17: Hypnotics and Sedatives) also are used adjunctively in acute mania. 
Additional commonly used alternative or adjunctive treatments for mania include the anticonvulsants sodium divalproex and carbamazepine, as well as other experimental agents (see Chapter 21: Drugs Effective in the Therapy of the Epilepsies). Drugs Used in the Treatment of Psychoses Several classes of drugs are effective in the symptomatic treatment of psychiatric disorders. They are most appropriately used in the therapy of schizophrenia, the manic phase of bipolar (manic-depressive) illness, and other acute idiopathic psychotic illnesses or conditions marked by severe agitation. They also are used as an alternative to electroconvulsive therapy (ECT) in severe depression with psychotic features and sometimes in the management of patients with organic psychotic disorders. Effective antipsychotic agents include phenothiazines, structurally similar thioxanthenes, and benzepines; butyrophenones (phenylbutylpiperidines) and diphenylbutylpiperidines; and indolones and other heterocyclic compounds. Since these chemically dissimilar drugs share many properties, information about their pharmacology and clinical uses is presented for the group as a whole. Particular attention is paid to chlorpromazine, the oldest representative of the phenothiazine-thioxanthene group of antipsychotic agents, and haloperidol, the original butyrophenone and representative of several related classes of aromatic butylpiperidine derivatives. Many patients have been treated with antipsychotic agents since their introduction in the 1950s. Although these drugs have had a revolutionary, beneficial impact on medical and psychiatric practice, their liabilities, especially the almost relentless association of older, typical or "neuroleptic" agents with extrapyramidal neurological effects, also must be emphasized. Newer antipsychotics are atypical in having less risk of extrapyramidal side effects, but some of them produce hypotension, seizures, weight gain, diabetes, hyperprolactinemia, and other adverse effects.

Tricyclic Antipsychotic Agents Antipsychotic agents are used primarily in the management of patients with psychotic or other serious psychiatric illnesses marked by agitation and impaired reasoning. Several dozen antipsychotic drugs are used in psychiatric conditions worldwide; still others are marketed primarily for other uses, including antiemetic and antihistaminic effects. The term neuroleptic has taken on connotations, at least in the United States, of relatively prominent experimental and clinical antagonism of D2-dopamine receptor activity, with substantial risk of extrapyramidal side effects. The term atypical antipsychotic has been used to describe agents that are associated with substantially lower risks of adverse extrapyramidal effects. Representative examples include clozapine, olanzapine, quetiapine, and low doses of risperidone (Blin, 1999; Markowitz et al., 1999). History The history of the antipsychotic agents is well summarized by Swazey (1974) and Caldwell (1978). In the early 1950s, some antipsychotic effects were obtained with extracts of the Rauwolfia plant and then with large doses of the purified active alkaloid reserpine, which was later chemically synthesized by Woodward. Although reserpine and related compounds that share its ability to deplete monoamines from their vesicular storage sites in neurons exert antipsychotic effects, these are relatively weak and are typically associated with severe side effects, including sedation, hypotension, diarrhea, anergy, and depressed mood. Thus, the clinical utility of reserpine primarily has been as an antihypertensive agent (see Chapter 33: Antihypertensive Agents and the Drug Therapy of Hypertension). Phenothiazine compounds were synthesized in Europe in the late nineteenth century as part of the development of aniline dyes such as methylene blue. In the late 1930s, a phenothiazine derivative, promethazine, was found to have antihistaminic and sedative effects. 
Attempts to treat agitation in psychiatric patients with promethazine and other antihistamines followed in the 1940s, but with little success. Meanwhile, the ability of promethazine to prolong barbiturate sleeping time in rodents was discovered, and the drug was introduced into clinical anesthesia as a potentiating and autonomic stabilizing agent (Laborit et al., 1952). This work prompted a search for other phenothiazine derivatives with anesthesia-potentiating actions, and in 1949–1950 Charpentier synthesized chlorpromazine. Soon thereafter, Laborit and his colleagues described the ability of this compound to potentiate anesthetics and produce "artificial hibernation." Chlorpromazine by itself did not cause a loss of consciousness but diminished arousal and motility, with some tendency to promote sleep. These central actions became known as ataractic or neuroleptic soon thereafter. The first attempts to treat mental illness with chlorpromazine were made in Paris in 1951 and early 1952 by Paraire and Sigwald (see Swazey, 1974). In 1952, Delay and Deniker became convinced that chlorpromazine achieved more than symptomatic relief of agitation or anxiety and that it had an ameliorative effect upon psychotic processes in diverse disorders. In 1954, Lehmann and Hanrahan in Montreal, followed by Winkelman in Philadelphia, reported the initial use of chlorpromazine in North America for the treatment of psychomotor excitement and manic states as well as schizophrenia (see Swazey, 1974). Clinical studies soon revealed that chlorpromazine was effective in the treatment of psychotic disorders of various types. Chemistry and Structure–Activity Relationships

This topic is reviewed in detail elsewhere (Baldessarini, 1985; Neumeyer and Booth, 2001). Phenothiazines have a three-ring structure in which two benzene rings are linked by a sulfur and a nitrogen atom (see Table 20–1). If the nitrogen at position 10 is replaced by a carbon atom with a double bond to the side chain, the compound is a thioxanthene. Substitution of an electron-withdrawing group at position 2 increases the efficacy of phenothiazines and other tricyclic congeners (e.g., chlorpromazine vs. promazine). The nature of the substituent at position 10 also influences pharmacological activity. As can be seen in Table 20–1, the phenothiazines and thioxanthenes can be divided into three groups on the basis of substitution at this site. Those with an aliphatic side chain include chlorpromazine and triflupromazine among the phenothiazines; these compounds are relatively low in potency (but not in clinical efficacy). Those with a piperidine ring in the side chain include thioridazine and mesoridazine. There is a somewhat lower incidence of extrapyramidal side effects with this substitution, possibly due to increased central antimuscarinic activity. Several potent phenothiazine antipsychotic compounds have a piperazine group in the side chain; fluphenazine and trifluoperazine are examples. Use of these potent compounds, most of which have relatively weak anticholinergic activity, entails a greater risk of inducing extrapyramidal side effects but less tendency to produce sedation or autonomic side effects, such as hypotension, unless unusually large doses are employed. Several piperazine phenothiazines have been esterified at a free hydroxyl group with long-chain fatty acids to produce slowly absorbed and hydrolyzed, long-acting, highly lipophilic prodrugs. The decanoates of fluphenazine and haloperidol and the enanthate of fluphenazine are used commonly in the United States, and several others are available internationally. 
Thioxanthenes also have aliphatic or piperazine side-chain substituents. The analog of chlorpromazine among the thioxanthenes is chlorprothixene. Piperazine-substituted thioxanthenes include clopenthixol, flupentixol, piflutixol, and thiothixene; they are all potent and effective antipsychotic agents, although only thiothixene is available in the United States. Since thioxanthenes have an olefinic double bond between the central-ring carbon atom at position 10 and the side chain, geometric isomers exist; the cis (or Z) isomers are the more active. The antipsychotic phenothiazines and thioxanthenes have three carbon atoms interposed between position 10 of the central ring and the first amino nitrogen atom of the side chain at this position; the amine is always tertiary. Antihistaminic phenothiazines (e.g., promethazine) or strongly anticholinergic phenothiazines (e.g., ethopropazine, diethazine) have only two carbon atoms separating the amino group from position 10 of the central ring. Metabolic N-dealkylation of the side chain or increasing the size of amino N-alkyl substituents reduces antipsychotic activity. Additional tricyclic antipsychotic agents are the benzepines, containing a seven-membered central ring, of which loxapine (a dibenzoxazepine; see Table 20–1) and clozapine (a dibenzodiazepine) are available in the United States. Loxapine-like agents include potent and typical neuroleptics with prominent antidopaminergic activity (e.g., clothiapine, metiapine, loxapine, zotepine, and others). They have an electron-withdrawing moiety at position 2, relatively close to the side-chain nitrogen atoms. Clozapine-like agents either lack a ring substituent (e.g., quetiapine, a dibenzothiazepine), have an analogous methyl substituent (notably olanzapine, a thienobenzodiazepine; see Table 20–1), or have an electronegative substituent at position 8, away from the side-chain nitrogen atoms (e.g., clozapine, fluperlapine, and others). 
In addition to dopamine receptors, clozapine-like agents interact at several other classes of receptors with varying affinities (α1- and α2-adrenergic, serotonin 5-HT2A and 5-HT2C, muscarinic cholinergic, histamine H1, and others). Some are highly effective antipsychotic agents, and clozapine, in particular, has proved effective even in chronically ill

patients who respond poorly to standard neuroleptics. The basic and clinical pharmacology of clozapine is reviewed elsewhere (Baldessarini and Frankenburg, 1991; Wagstaff and Bryson, 1995; Worrell et al., 2000). Clozapine strongly stimulated searches for additional, safer agents with antipsychotic activity and an atypically low risk of extrapyramidal neurological side effects. This search led to a series of atypical antipsychotic agents with some pharmacological similarities to clozapine. These include the structurally similar olanzapine and quetiapine, and the mixed antidopaminergic-antiserotonergic agent risperidone (a benzisoxazole; see Table 20–1; Owens and Risch, 1998; Waddington and Casey, 2000). The butyrophenone (phenylbutylpiperidine) neuroleptics include haloperidol (Janssen, 1974). Other experimental heterocyclic-substituted phenylbutylpiperidines include the spiperones. An analogous compound, droperidol, is a very short-acting, highly sedative neuroleptic that is used almost exclusively in anesthesia (see Chapter 14: General Anesthetics) but sometimes also in psychiatric emergencies. Additional analogs in the diphenylbutylpiperidine series include fluspirilene, penfluridol, and pimozide (see Table 20–1 and Neumeyer and Booth, 2001). These are potent neuroleptics with prolonged action. In the United States, pimozide is indicated for the treatment of Tourette's syndrome of severe tics and involuntary vocalizations, although it also is an effective antipsychotic. Several other classes of heterocyclic compounds have antipsychotic effects, but too few are available or sufficiently well characterized to permit conclusions regarding structure–activity relationships (see Neumeyer and Booth, 2001). These include several indole compounds [notably, molindone (see Table 20–1) and oxypertine]. 
Another experimental compound, butaclamol, is a potent antidopaminergic agent that has a pentacyclic structure with a dibenzepine core and structural and pharmacological similarity to loxapine-like rather than clozapine-like dibenzepines. Its active (dextrorotatory) and inactive enantiomeric forms have been useful in characterizing the stereochemistry of sites of action of neuroleptics at dopamine receptors. Risperidone (see Table 20–1) has prominent antiserotonergic (5-HT2) as well as antidopaminergic (D2) and antihistaminic (H1) activity. Although risperidone and clozapine share those receptor affinities, risperidone is a much more potent antidopaminergic agent and, unlike clozapine, can induce extrapyramidal symptoms as well as prominent hyperprolactinemia. Nevertheless, risperidone can be considered a "quantitatively atypical" antipsychotic agent in that its extrapyramidal neurological side effects are limited at low daily doses (6 mg or less). A growing series of heterocyclic antipsychotic agents are the enantiomeric, substituted benzamides. These include the gastroenterologic agents metoclopramide and cisapride, which have antiserotonergic as well as anti-D2-dopaminergic actions. In addition, several benzamides, like the butyrophenones and their congeners, are relatively selective antagonists at central D2-dopamine receptors, and many have neuroleptic-antipsychotic activity. Experimental examples include epidepride, eticlopride, nemonapride, raclopride, remoxipride, and sultopride; sulpiride is employed clinically in other countries, mainly as a sedative. Pharmacological Properties Antipsychotic drugs share many pharmacological effects and therapeutic applications (see Baldessarini, 1985; Marder, 1998; Owens and Risch, 1998). Chlorpromazine and haloperidol are commonly taken as prototypes for the older, standard neuroleptic-type agents; newer agents can be compared and contrasted to them. Many antipsychotic drugs, especially chlorpromazine and

other agents of low potency, have a prominent sedative effect. This is particularly conspicuous early in treatment, although tolerance to this effect is typical; sedation may not be noticeable when very agitated psychotic patients are treated. Despite their sedative effects, neuroleptic drugs generally are not used to treat anxiety disorders, largely because of their autonomic and neurological side effects, which paradoxically can include severe anxiety and restlessness (akathisia). The risk of developing extrapyramidal side effects including tardive dyskinesia following long-term administration of neuroleptic drugs makes these agents less desirable than others for the treatment of anxiety. The term neuroleptic was introduced to denote the effects of chlorpromazine and reserpine on the behavior of laboratory animals and in psychiatric patients and was intended to contrast their effects to those of sedatives and other CNS depressants. The neuroleptic syndrome involves suppression of spontaneous movements and complex behaviors, while spinal reflexes and unconditioned nociceptive-avoidance behaviors remain intact. In human beings, neuroleptic drugs reduce initiative and interest in the environment as well as manifestations of emotion or affect. Such effects led to their being considered "tranquilizers" before their unique antipsychotic effects were well established. In their clinical use, there may be some initial slowness in response to external stimuli and drowsiness. However, subjects are easily aroused, can answer questions, and retain intact cognition. Ataxia, incoordination, or dysarthria do not occur at ordinary doses. Typically, psychotic patients soon become less agitated, and withdrawn or autistic patients sometimes become more responsive and communicative. Aggressive and impulsive behavior diminishes. Gradually (usually over a period of days), psychotic symptoms of hallucinations, delusions, and disorganized or incoherent thinking tend to disappear. 
Neuroleptic agents also exert characteristic neurological effects, including bradykinesia, mild rigidity, some tremor, and subjective restlessness (akathisia), that resemble the signs of Parkinson's disease. Although early use of the term neuroleptic appears to have encompassed the whole unique syndrome just described, and neuroleptic still is used as a synonym for antipsychotic, there now is a tendency to use the term neuroleptic to emphasize the more neurological aspects of the syndrome (i.e., the parkinsonian and other extrapyramidal effects). Except for clozapine and perhaps olanzapine and quetiapine, all antipsychotic drugs available in the United States also have effects on movement and posture and can thus be called neuroleptic. However, the more general term antipsychotic is preferable. Introduction of atypical drugs such as clozapine, olanzapine, and quetiapine that are antipsychotic but have little extrapyramidal action has reinforced this trend. General Psychophysiological and Behavioral Effects In laboratory animals and in human beings, the most prominent observable effects of many antipsychotic agents are strikingly similar (Fielding and Lal, 1978). In low doses, operant behavior is reduced but spinal reflexes are unchanged. In laboratory animals, exploratory behavior is diminished, and responses to a variety of stimuli are fewer, slower, and smaller in magnitude, although the ability to discriminate stimuli is retained. Conditioned avoidance behaviors are selectively inhibited, whereas unconditioned escape or avoidance responses are not. Highly reinforcing self-stimulation of the animal brain (commonly induced with electrodes placed in the monoamine-rich medial forebrain bundle) is blocked, although the capacity to press the stimulation-inducing lever is not lost. Behavioral activation, stimulated environmentally or pharmacologically (particularly by stimulants and dopaminergic agonists), is blocked. Feeding is inhibited. 
Most neuroleptics block the emesis, hyperactivity, and aggression induced by apomorphine and other dopaminergic agonists. In high doses, most neuroleptics induce characteristic cataleptic immobility that allows an animal to be placed in abnormal postures that persist. Muscle tone is increased, and ptosis is typical. The animal appears to be indifferent to most stimuli, although it continues to withdraw from those that are noxious or painful. Many learned tasks still can be performed if

sufficient stimulation and motivation are provided. Even very high doses of most neuroleptics do not induce coma, and the lethal dose is extraordinarily high. Effects on Motor Activity Nearly all antipsychotic agents diminish spontaneous motor activity in laboratory animals and in human beings. However, one of the more disturbing clinical side effects of these agents is akathisia, which is manifested by an increase in restless activity and is not readily mimicked by animal behavior. The cataleptic immobility of animals treated with neuroleptics resembles the catatonia seen in some psychotic patients as well as in association with a variety of metabolic and neurological disorders affecting the central nervous system (CNS). In patients, catatonic signs, along with other features of psychotic illnesses, are sometimes relieved by antipsychotic agents. However, rigidity and bradykinesia, which mimic catatonia, can be induced in patients, especially by large doses of potent neuroleptics, and reversed by removal of the offending drug or the addition of an antiparkinsonian agent (see Fielding and Lal, 1978; Janssen and Van Bever, 1978). Theories concerning the mechanisms underlying these extrapyramidal reactions, as well as descriptions of their clinical presentations and management, are given below. Effects on Sleep Antipsychotic drugs have inconsistent effects on sleep patterns but tend to normalize the sleep disturbances characteristic of many psychoses and mania. The ability to prolong and enhance the effects of opioid and hypnotic drugs appears to parallel the sedative rather than the neuroleptic potency of a particular agent. Thus, the more potent neuroleptic agents that do not cause drowsiness also do not enhance hypnosis produced by other drugs. 
Effects on Conditioned Responses Chlorpromazine and other neuroleptics impair the ability of animals to make a conditioned avoidance response to a learned sensory cue that signals the onset of punishing shock avoidable by moving to a safe place in an experimental chamber. Under the influence of small doses of these drugs, animals ignore the warning signal but still attempt to escape once the shock is applied. General CNS depressants affect both avoidance (the conditioned response) and escape (the unconditioned response) to approximately the same extent, but suppression of unconditioned escape occurs only with doses of neuroleptics that produce ataxia or hypnosis. Passive avoidance behavior, requiring immobility, also is suppressed by neuroleptic drugs, in contrast to what might be expected of drugs that suppress locomotion. Since correlations between antipsychotic effectiveness and performance in conditioned avoidance tests are good for many types of neuroleptic agents, these tests have been important in pharmaceutical screening procedures. However, despite their empirical utility and quantitative characteristics, effects on conditioned avoidance have not provided important insights into the basis of clinical antipsychotic effects. For example, the effects of neuroleptic drugs on conditioned avoidance, but not their clinical antipsychotic actions, are subject to tolerance and are blocked by anticholinergic agents. Moreover, the extraordinarily close correlation between the potencies of drugs in conditioned avoidance tests and their ability to block the behavioral effects of dopaminergic agonists such as amphetamine or apomorphine suggests that such avoidance tests may be selective for drugs with extrapyramidal and other neurological effects. The ability of the atypical antipsychotic drugs, such as clozapine and olanzapine, to antagonize dopamine agonists and to block conditioned avoidance responses in animal behavioral tests also supports this interpretation (see Fielding and Lal, 1978;

Janssen and Van Bever, 1978; Arnt and Skarsfeldt, 1998). Effects on Complex Behavior Antipsychotic drugs can impair vigilance or motor responses in human subjects performing a variety of tasks, such as continuous rotor-pursuit and tapping-speed tests. The drugs produce relatively little impairment of digit–symbol substitution, a test of intellectual functioning. In contrast, barbiturates cause greater impairment of performance in digit–symbol substitution than in continuous-performance and other vigilance tests. Moreover, most antipsychotic agents can improve cognitive functioning in psychotic patients along with symptomatic improvement. Effects on Specific Areas of the Nervous System Effects of antipsychotic drugs are apparent at all levels in the nervous system. Although knowledge of the actions underlying the antipsychotic and many of the neurological effects of neuroleptic drugs remains incomplete, theories based on their ability to antagonize the actions of dopamine as a neurotransmitter in the basal ganglia and limbic portions of the forebrain have become most prominent and are supported by a large body of data. Cerebral Cortex Since psychosis involves a disorder of higher functions and thought processes, cortical effects of antipsychotic drugs are of great interest. Antipsychotic drugs interact with dopaminergic projections to the prefrontal and deep-temporal (limbic) regions of the cerebral cortex, with relative sparing of these areas from adaptive changes in dopamine metabolism that would suggest tolerance to the actions of neuroleptics (Bunney et al., 1987). Seizure Threshold Many neuroleptic drugs can lower the seizure threshold and induce discharge patterns in the electroencephalogram (EEG) that are associated with epileptic seizure disorders. 
Clozapine as well as aliphatic phenothiazines with low potency (such as chlorpromazine) seem particularly able to do this, while the more potent piperazine phenothiazines and thioxanthenes (notably fluphenazine and thiothixene), as well as risperidone, seem much less likely to have this effect (Itil, 1978; Baldessarini and Frankenburg, 1991). The butyrophenones have variable and unpredictable effects on seizure activity; molindone may have the least activity of this type. Clozapine has a clearly dose-related risk of inducing seizures in nonepileptic patients (Baldessarini and Frankenburg, 1991), and clozapine and olanzapine are associated with more EEG abnormalities than are many high-potency neuroleptics, including risperidone (Centorrino et al., 2001). Antipsychotic agents, especially clozapine and low-potency phenothiazines and thioxanthenes, should be used with extreme caution, if at all, in untreated epileptic patients and in patients undergoing withdrawal from central depressants such as alcohol, barbiturates, or benzodiazepines. Most antipsychotic drugs, especially the piperazines, as well as the novel atypical agents quetiapine and risperidone, can be used safely in epileptic patients if moderate doses are attained gradually and if concomitant anticonvulsant drug therapy is maintained (see Chapter 21: Drugs Effective in the Therapy of the Epilepsies). Basal Ganglia Because the extrapyramidal effects of most clinically used antipsychotic drugs are prominent, a great deal of interest has centered on the actions of these drugs in the basal ganglia, notably the caudate nucleus, putamen, globus pallidus, and allied nuclei, which play a crucial role in the control of posture and the extrapyramidal aspects of movement. The critical role of a deficiency of dopamine in this region in the pathogenesis of Parkinson's disease, the potent activity of neuroleptics as antagonists of dopamine receptors, and the striking resemblance between clinical manifestations of Parkinson's disease and the neurological effects of neuroleptic drugs all have focused attention on the role of a deficiency of dopaminergic activity in some of the neuroleptic-induced extrapyramidal effects (Carlsson, 1990). The hypothesis that interference with the transmitter function of dopamine in the mammalian forebrain might contribute to the neurological and possibly also the antipsychotic effects of the neuroleptic drugs arose from the observation that neuroleptic drugs consistently increased the concentrations of the metabolites of dopamine but had variable effects on the metabolism of other neurotransmitters. The importance of dopamine also was supported by histochemical studies, which indicated a preferential distribution of dopamine-containing fibers between midbrain and the basal ganglia (notably, the nigrostriatal tract), and within the hypothalamus (see Chapter 12: Neurotransmission and the Central Nervous System). Other dopamine-containing neurons project from midbrain tegmental nuclei to forebrain regions associated with the limbic system as well as to temporal and prefrontal cerebral cortical areas closely related to the limbic system. A simplistic but attractive concept arose: many extrapyramidal neurological effects of the antipsychotic drugs might be mediated by antidopaminergic effects in the basal ganglia. Their antipsychotic effects might be mediated by modification of dopaminergic neurotransmission in the limbic, mesocortical, and hypothalamic systems.
Antagonism of dopamine-mediated synaptic neurotransmission is an important action of neuroleptic drugs (Carlsson, 1990). Thus, drugs with neuroleptic actions, but not their inactive congeners, initially increase the rate of production of dopamine metabolites, the rate of conversion of the precursor amino acid tyrosine to dihydroxyphenylalanine (DOPA) and its metabolites, and the rate of firing of dopamine-containing cells in the midbrain. These effects usually have been interpreted to represent adaptive responses of neuronal systems that tend to reduce the impact of interrupting synaptic transmission at dopaminergic terminals in the forebrain. Supporting evidence for such an interpretation includes the observation that small doses of neuroleptic drugs block behavioral or neuroendocrine effects of systemically administered or intracerebrally injected dopaminergic agonists. An example is stereotyped gnawing behavior in the rat induced by apomorphine. Many neuroleptic drugs (except the butyrophenones, their congeners, and the benzamides) also block the effects of agonists on dopamine-sensitive adenylyl cyclase associated with D1-dopamine receptors in forebrain tissue (Figure 20-1). Atypical antipsychotic drugs such as clozapine and quetiapine are characterized by their low affinity or weak actions in such tests (Campbell et al., 1991). Whereas the initial effect of neuroleptics is to block D2 receptors and stimulate increased firing and metabolic activity in dopamine neurons, these responses eventually are replaced by diminished activity ("depolarization inactivation"), particularly in the extrapyramidal basal ganglia (Bunney et al., 1987). The timing of these adaptive changes correlates well with the gradual evolution of parkinsonian bradykinesia over days in the clinical application of neuroleptics (Tarsy et al., 2001). Figure 20-1. Sites of Action of Neuroleptics and Lithium.
In varicosities ("terminals") along terminal arborizations of dopamine (DA) neurons projecting from midbrain to forebrain, tyrosine is oxidized to dihydroxyphenylalanine (DOPA) by tyrosine hydroxylase (TH), the rate-limiting step in catecholamine biosynthesis, then decarboxylated to DA by aromatic L-amino acid decarboxylase (AAD) and stored in vesicles. Following exocytotic release (inhibited by lithium) by depolarization in the presence of Ca2+, DA interacts with postsynaptic receptors (R) of D1 and D2 types (and structurally similar but less prevalent D1-like and D2-like receptors), as well as with presynaptic D2 and D3 autoreceptors. Inactivation of transsynaptic communication occurs primarily by active transport ("reuptake") of DA into presynaptic terminals (inhibited by many stimulants), with secondary deamination by mitochondrial monoamine oxidase (MAO). Postsynaptic D1 receptors, through Gs-type G proteins, activate adenylyl cyclase (AC) to convert ATP to cyclic AMP (cAMP), whereas D2 receptors inhibit AC through Gi proteins. D2 receptors also activate receptor-operated K+ channels, suppress voltage-gated Ca2+ currents, and stimulate phospholipase-C (PLC), perhaps via the βγ subunits liberated from activated Gi (see Chapter 2: Pharmacodynamics: Mechanisms of Drug Action and the Relationship Between Drug Concentration and Effect), to convert phosphatidylinositol bisphosphate (PIP2) to inositol trisphosphate (IP3) and diacylglycerol (DAG), with secondary modulation of Ca2+ and protein kinases. Lithium inhibits the phosphatase that liberates inositol (I) from inositol phosphate (IP). Both Li+ and valproate can modify the abundance or function of G proteins and effectors, as well as protein kinases and several cell and nuclear regulatory factors. D2-like autoreceptors suppress synthesis of DA by diminishing phosphorylation of rate-limiting TH, as well as limiting DA release (possibly through modulation of Ca2+ or K+ currents). In contrast, presynaptic A2 adenosine receptors (A2R) activate AC and, via cyclic AMP production, stimulate TH activity. Nearly all antipsychotic agents block D2 receptors and autoreceptors; some also block D1 receptors (see Table 20-2).
Initially in antipsychotic treatment, DA neurons activate and release more DA but, following repeated treatment, they enter a state of physiological depolarization inactivation, with diminished production and release of DA, in addition to continued receptor blockade.

Radioligand-binding assays for dopamine receptor subtypes have been used to define more precisely the mechanism of action of neuroleptic agents (see Civelli et al., 1993; Baldessarini and Tarazi, 1996; Neve and Neve, 1997; see Table 20-2 and Figure 20-1). Estimates of the clinical potency of most types of antipsychotic drugs correlate well with their relative potency in vitro to inhibit binding of these ligands to D2-dopamine receptors (see Chapter 12: Neurotransmission and the Central Nervous System). This correlation is obscured to some extent by the tendency of neuroleptics to accumulate in brain tissue to different degrees (Tsuneizumi et al., 1992; Cohen et al., 1992). Nevertheless, almost all clinically effective antipsychotic agents (with the notable exception of clozapine and quetiapine) have characteristically high affinity for D2 receptors. Although some antipsychotics (especially thioxanthenes, phenothiazines, and clozapine) bind with relatively high affinity to D1 receptors, they also block D2 receptors and other D2-like receptors, including the D3- and D4-receptor subtypes (Sokoloff et al., 1990; Van Tol et al., 1991; Baldessarini and Tarazi, 1996; Tarazi and Baldessarini, 1999). Butyrophenones and congeners (e.g., haloperidol, pimozide, N-methylspiperone) as well as experimental benzamide neuroleptics (e.g., eticlopride, nemonapride, raclopride, remoxipride) have relatively high selectivity as antagonists at D2 and D3 dopamine receptors, with variable D4 affinity. The physiological and clinical consequences of selectively blocking D1 or D5 receptors remain obscure, although experimental benzazepines (e.g., SCH-23390 and SCH-39166 or ecopipam) with such properties, but apparently weak antipsychotic effects, are known (Daly and Waddington, 1992; Kebabian et al., 1997).
Atypical antipsychotic agents such as clozapine and other benzepines have low affinity for D2-dopamine receptors and, correspondingly, little propensity to produce extrapyramidal side effects. They are, however, active α1-adrenergic antagonists, as are many other antipsychotic agents (Baldessarini et al., 1992). This action may contribute to sedative and hypotensive side effects or might underlie useful psychotropic effects, although assessment of the psychotropic potential of centrally active antiadrenergic agents is limited. Many antipsychotic agents also have some affinity for 5-HT2A serotonin receptors, and this is particularly prominent in the case of clozapine, olanzapine, quetiapine, risperidone, and other investigational D2/5-HT2A antagonists (Chouinard et al., 1993; Leysen et al., 1994; see also Chapter 11: 5-Hydroxytryptamine (Serotonin): Receptor Agonists and Antagonists). This admixture of moderate affinities for several CNS receptor types (including also muscarinic acetylcholine and H1-histamine receptors) may contribute to the virtually unique pharmacological profile of the atypical antipsychotic agent clozapine (Baldessarini and Frankenburg, 1991). Clozapine also has modest selectivity for dopamine D4 receptors over other dopamine-receptor types. D4 receptors, preferentially localized in cortical and limbic brain regions, are upregulated after repeated administration of clozapine and other typical and atypical antipsychotic drugs. These receptors may contribute to the clinical actions of antipsychotic drugs, although agents that are selective D4 or mixed D4/5-HT2A antagonists have not proved effective in the treatment of psychotic patients (Baldessarini, 1997; Kramer et al., 1997; Tarazi and Baldessarini, 1999; Truffinet et al., 1999; see "Prospectus," below). Limbic System Dopaminergic projections from the midbrain terminate on septal nuclei, the olfactory tubercle, the amygdala, and other structures within the temporal and prefrontal lobes of the cerebrum.
Because of the dopamine hypothesis just reviewed, much attention also has been given to the mesolimbic and mesocortical systems as possible sites of mediation of some of the antipsychotic effects of these agents. Speculations about the pathophysiology of the idiopathic psychoses, such as schizophrenia, have for many years centered on the limbic system. Such speculation has been given indirect encouragement by repeated "natural experiments" that have associated psychotic mental phenomena with lesions of the temporal lobe and other portions of the limbic system. The finding that D3 and D4 receptors are preferentially expressed in limbic areas of the CNS has led to increased efforts to identify agents selective for these receptors that might have antipsychotic efficacy with a reduced tendency to cause extrapyramidal side effects, so far without success (Kebabian et al., 1997; Kramer et al., 1997; Lahti et al., 1998; Tarazi and Baldessarini, 1999). Moreover, long-term administration of typical and atypical antipsychotic drugs does not alter D3 receptor levels in rat forebrain regions while increasing expression of D2 and D4 receptors (Tarazi et al., 1997). These findings suggest that D3 receptors are unlikely to play a pivotal role in antipsychotic drug actions, perhaps due to their avid affinity for endogenous dopamine, which may prevent their interaction with antipsychotics (Levant, 1997). Many of the behavioral, neurophysiological, biochemical, and pharmacological findings with regard to the properties of the dopaminergic system of the basal ganglia have been extended to mesolimbic and mesocortical tissue. Certain important effects of antipsychotic drugs are similar in extrapyramidal and limbic regions, including those on ligand-binding assays for dopaminergic receptors. However, the extrapyramidal and antipsychotic actions of antipsychotic agents differ in several ways. 
For example, while some acute extrapyramidal effects of neuroleptics tend to diminish or to disappear with time or when anticholinergic drugs are administered concurrently, this is not characteristic of the antipsychotic effects. Dopaminergic subsystems in the forebrain differ functionally and in the physiological regulation of their responses to drugs (see Bunney et al., 1987; Moore, 1987; Sulser and Robinson, 1978; Wolf and Roth, 1987). For example, anticholinergic agents block the increase in turnover of dopamine in the basal ganglia induced by neuroleptic agents but not in limbic areas containing dopaminergic terminals. Further, development of tolerance to enhancement of the metabolic turnover of dopamine by antipsychotics is much less prominent in limbic than in extrapyramidal areas (see Carlsson, 1990). In Vivo Occupation of Cerebral Neurotransmitter Receptors Levels of occupation of dopamine receptors and other receptors in human brain can be estimated with positron emission tomography (PET) in patients treated with antipsychotic drugs. Such analyses not only support conclusions arising from laboratory studies of receptor occupancy (see Table 20-2) but also assist in predicting clinical efficacy and extrapyramidal side effects as well as clinical dosing, even in advance of controlled clinical trials (Farde et al., 1995; Waddington and Casey, 2000). For example, occupation of more than 75% of D2-like receptors in the basal ganglia is associated with risk of acute extrapyramidal side effects and is commonly found with clinical doses of typical neuroleptics (Farde et al., 1995). In contrast, therapeutic doses of clozapine usually are associated with lower levels of occupation of D2 receptors (averaging 40% to 50%), but higher (70% to 90%) levels of occupation of cortical 5-HT2 receptors (Kapur et al., 1999; Nordstrom et al., 1995). Of the novel atypical antipsychotics, only quetiapine has a clozapine-like in vivo receptor-occupancy profile, resembling clozapine's levels of occupation of both D2 (40% to 50%) and 5-HT2 receptors (50% to 70%) (Gefvert et al., 1998). Olanzapine and risperidone also block cortical 5-HT2 receptors at high levels (80% to 100%), with greater effects at D2 sites (typically, 50% to 90%) than do clozapine or quetiapine (Farde et al., 1995; Nordstrom et al., 1998; Kapur et al., 1999). In addition to its relatively high levels of D2-receptor occupation, olanzapine is more antimuscarinic than is risperidone, perhaps accounting for its lower risk of acute extrapyramidal effects (see Tables 20-1 and 20-2).
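The occupancy figures above follow from a simple law-of-mass-action relationship between free drug concentration and fractional receptor occupancy. The sketch below is a hypothetical illustration (the inhibition constant Ki is an assumed value, not one given in the text) of how occupancy rises with concentration toward the ~75% threshold associated with extrapyramidal side effects:

```python
# Minimal sketch of law-of-mass-action receptor occupancy:
#   occupancy = C / (C + Ki)
# where C is the free drug concentration at the receptor and Ki is the
# drug's affinity (inhibition constant). The Ki below is an assumed,
# illustrative value, not a measured figure from the chapter.

def fractional_occupancy(conc_nM: float, ki_nM: float) -> float:
    """Fraction of receptors occupied at free concentration conc_nM."""
    return conc_nM / (conc_nM + ki_nM)

assumed_ki = 1.0  # nM, hypothetical
for conc in (1.0, 3.0, 9.0):
    occ = fractional_occupancy(conc, assumed_ki)
    print(f"C = {conc:>4.1f} nM -> occupancy = {occ:.0%}")
```

With this model, a concentration equal to Ki gives 50% occupancy, and three times Ki gives 75%, which is one way to see why a modest dose increase can cross the side-effect threshold.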
Hypothalamus and Endocrine Systems In addition to neurological and antipsychotic effects that appear to be mediated in part by antidopaminergic actions of neuroleptic drugs, endocrine changes occur as a result of their effects on the hypothalamus or pituitary that also may involve dopamine. Prominent among these is the ability of most antipsychotic drugs to increase the secretion of prolactin. This effect on prolactin secretion probably is due to a blockade of the pituitary actions of the tuberoinfundibular dopaminergic system that projects from the arcuate nucleus of the hypothalamus to the median eminence. D2-dopaminergic receptors on mammotrophic cells in the anterior pituitary mediate the prolactin-inhibiting action of dopamine secreted at the median eminence into the hypophyseal portal system (see Ben-Jonathan, 1985; see also Chapter 56: Pituitary Hormones and Their Hypothalamic Releasing Factors). Correlations between the potencies of antipsychotic drugs in stimulating prolactin secretion and causing behavioral effects are excellent for many types of agents (Sachar, 1978). Clozapine and quetiapine are exceptional in having minimal effects on prolactin (Arvanitis et al., 1997; Sachar, 1978), and olanzapine produces only minor, transient increases in prolactin levels (Tollefson and Kuntz, 1999), whereas risperidone has an unusually potent prolactin-elevating effect (Grant and Fitton, 1994). The effects of neuroleptics on prolactin secretion tend to occur, however, at lower doses than do their antipsychotic effects; this may reflect their action outside the blood-brain barrier in the adenohypophysis. Little tolerance develops to the effect of antipsychotic drugs on prolactin, even after years of treatment. However, the effect is rapidly reversible when the drugs are discontinued (Bitton and Schneider, 1992). This effect of antipsychotic agents is presumed to be

responsible for the breast engorgement and galactorrhea that occasionally are associated with their use, sometimes even in male patients given high doses of neuroleptic agents. Because antipsychotic drugs are used chronically and thus cause prolonged hyperprolactinemia, there has been concern about their possible contribution to risk of carcinoma of the breast, although clinical evidence has not supported this concern (Dickson and Glazer, 1999; Mortensen, 1994). Nevertheless, neuroleptic and other agents that stimulate secretion of prolactin should be avoided in patients with established carcinoma of the breast, particularly with metastases. Some antipsychotic drugs reduce secretion of gonadotropins, estrogens, and progestins, possibly contributing to amenorrhea. The effects of neuroleptics on other hypothalamic neuroendocrine functions are much less well characterized, although neuroleptics inhibit the release of pituitary growth hormone and may reduce the secretion of corticotropin-releasing hormone (CRH) that occurs in response to stress. Nevertheless, neuroleptics are poor therapy for acromegaly, and there is no evidence that they retard growth or development of children. In addition, chlorpromazine can decrease secretion of neurohypophyseal hormones. Weight gain and increased appetite occur with most neuroleptics, particularly clozapine, others of low potency, and olanzapine. Chlorpromazine also may impair glucose tolerance and insulin release to a clinically appreciable degree in some patients (Erle et al., 1977). In addition, several atypical antipsychotic agents (notably clozapine, olanzapine, and quetiapine) have been associated with risk of new-onset type 2 diabetes that may not be accounted for entirely by weight gain (Wirshing et al., 1998). In addition to neuroendocrine effects, it is likely that other autonomic effects of antipsychotic drugs may be mediated by the hypothalamus.
An important example is the poikilothermic effect of chlorpromazine and other neuroleptic agents, which impairs the body's ability to regulate temperature such that hypo- or hyperthermia may result, depending on the ambient temperature. Clozapine can induce elevations of body temperature. Brainstem Clinical doses of the antipsychotic agents usually have little effect on respiration. However, vasomotor reflexes mediated by either the hypothalamus or the brainstem are depressed by relatively low doses of chlorpromazine. This effect might occur at many points in the reflex pathway, and the net result is a centrally mediated fall in blood pressure. Even in cases of acute overdosage with suicidal intent, the antipsychotic drugs usually do not cause life-threatening coma or suppression of vital functions; this contributes importantly to their safety. In addition, haloperidol has been administered safely in doses exceeding 500 mg/24 hours intravenously to control agitation in delirious patients (Tesar et al., 1985). Chemoreceptor Trigger Zone (CTZ) Most neuroleptics protect against the nausea- and emesis-inducing effects of apomorphine and certain ergot alkaloids, all of which can interact with central dopaminergic receptors in the CTZ of the medulla. The antiemetic effect of most neuroleptics occurs with low doses. Drugs or other stimuli that cause emesis by an action on the nodose ganglion or locally on the gastrointestinal tract are not antagonized by antipsychotic drugs, but potent piperazines and butyrophenones are sometimes effective against nausea caused by vestibular stimulation. Autonomic Nervous System Since various antipsychotic agents have antagonistic interactions at peripheral α-adrenergic, serotonin (5-HT2A), and histamine (H1) receptors, their effects on the autonomic nervous system are complex and unpredictable. Chlorpromazine, clozapine, and thioridazine have particularly significant α-adrenergic antagonistic activity. The potent piperazine tricyclic neuroleptics (e.g., fluphenazine, trifluoperazine), as well as haloperidol and risperidone, have antipsychotic effects even when used in low doses and show little antiadrenergic activity in patients. The muscarinic-cholinergic blocking effects of antipsychotic drugs are relatively weak, but the blurring of vision commonly experienced with chlorpromazine may be due to an anticholinergic action on the ciliary muscle. Chlorpromazine regularly produces miosis, which can be due to α-adrenergic blockade. Other phenothiazines can cause mydriasis; this is especially likely to occur with clozapine or thioridazine, which are potent muscarinic antagonists. Chlorpromazine can cause constipation and decreased gastric secretion and motility, and clozapine can decrease the efficiency of clearing saliva and induce severe impairment of intestinal motility (Rabinowitz et al., 1996; Theret et al., 1995). Decreased sweating and salivation are additional manifestations of the anticholinergic effects of such drugs. Acute urinary retention is uncommon but can occur in males with prostatism. Anticholinergic effects are least frequently caused by the potent neuroleptics, including haloperidol and risperidone. The phenothiazines inhibit ejaculation without interfering with erection. Thioridazine produces this effect with some regularity, sometimes limiting its acceptance by male patients. Attribution of this effect to adrenergic blockade is logical but unsubstantiated, inasmuch as thioridazine is less potent than chlorpromazine in its antiadrenergic effects.
Kidney and Electrolyte Balance Chlorpromazine may have weak diuretic effects in animals and human beings because of a depressant action on the secretion of antidiuretic hormone (ADH), inhibition of reabsorption of water and electrolytes by a direct action on the renal tubule, or both. The slight fall in blood pressure that occurs with chlorpromazine is not associated with a significant change in glomerular filtration rate; indeed, renal blood flow tends to increase. The syndrome of idiopathic polydipsia with potential hyponatremia has been improved with clozapine, presumably through CNS mechanisms (Siegel et al., 1998). Cardiovascular System The actions of chlorpromazine on the cardiovascular system are complex because the drug produces direct effects on the heart and blood vessels and also indirect actions through CNS and autonomic reflexes. Chlorpromazine and other low-potency or atypical antipsychotic agents can cause orthostatic hypotension, systolic blood pressure being affected more than diastolic. Tolerance usually develops to the hypotensive effect over several weeks. However, some degree of orthostatic hypotension may persist indefinitely, especially in elderly patients (Ray et al., 1987). Chlorpromazine and other phenothiazines with low potency can have a direct negative inotropic action and a quinidine-like antiarrhythmic effect on the heart. Electrocardiographic (ECG) changes include prolongation of the QT and PR intervals, blunting of T waves, and depression of the ST segment. Thioridazine, in particular, causes a high incidence of QT- and T-wave changes and may very rarely produce ventricular arrhythmias and sudden death. These effects are uncommon when potent antipsychotic agents are administered. Clozapine has been associated with rare cases of early carditis and later-appearing cardiomyopathy (Killian et al., 1999). Miscellaneous Pharmacological Effects

Interactions of antipsychotic drugs with central neurohumors other than dopamine may contribute to their antipsychotic effects or other actions. For example, many antipsychotics enhance the turnover of acetylcholine, especially in the basal ganglia, perhaps secondary to the blockade of inhibitory dopamine heteroceptors on cholinergic neurons. In addition, as discussed above, there is an inverse relationship between antimuscarinic potency of antipsychotic drugs in the brain and the likelihood of extrapyramidal effects (Snyder and Yamamura, 1977). Chlorpromazine and low-potency antipsychotic agents including clozapine have antagonistic actions at histamine receptors that probably contribute to their sedative effects. Antagonistic interactions also occur at serotonin 5-HT2A receptors in the forebrain. The significance of this effect is not certain, but several antipsychotic agents (notably risperidone, olanzapine, quetiapine, sertindole, and ziprasidone) were developed in part to mimic the relatively potent and selective antagonistic activity of clozapine at serotonin 5-HT2A receptors (Ichikawa and Meltzer, 1999; Meltzer and Nash, 1991). Absorption, Distribution, Fate, and Excretion Some antipsychotic drugs tend to have erratic and unpredictable patterns of absorption, particularly after oral administration and even when liquid preparations are used. Parenteral (intramuscular) administration increases the bioavailability of active drug by four to ten times. The drugs are highly lipophilic, highly membrane- or protein-bound, and accumulate in the brain, lung, and other tissues with a high blood supply; they also enter the fetal circulation and breast milk. It is virtually impossible (and usually not necessary) to remove these agents by dialysis.
The usually stated elimination half-lives with respect to total concentrations in plasma are typically 20 to 40 hours, but complex patterns of delayed elimination may occur with some agents, particularly the butyrophenones and their congeners (Cohen et al., 1992). The biological effects of single doses of most neuroleptics usually persist for at least 24 hours; this encourages the common practice of giving the entire daily dose at one time, once the patient has accommodated to the initial side effects of the drug. Elimination from the plasma may be more rapid than from sites of high lipid content and binding, notably in the CNS, but direct pharmacokinetic studies on this issue are few and inconclusive (Sedvall, 1992). Metabolites of some agents have been detected in the urine for as long as several months after administration of the drug has been discontinued. Slow removal of drug may contribute to the typically slow rate of exacerbation of psychosis after stopping drug treatment. Repository ("depot") preparations of esters of neuroleptic drugs are absorbed and eliminated much more slowly than are oral preparations. For example, whereas half of an oral dose of fluphenazine hydrochloride is eliminated in about 20 hours, the elimination of the decanoate ester following a depot intramuscular injection has a nominal half-life of 7 to 10 days, although the overall clearance of fluphenazine decanoate and normalization of hyperprolactinemia following repeated dosing can require 6 to 8 months (Sampath et al., 1992). The main routes of metabolism of the antipsychotic drugs are oxidative processes mediated largely by genetically controlled hepatic cytochrome-P450 (CYP) microsomal oxidases and by conjugation processes. Hydrophilic metabolites of these drugs are excreted in the urine and, to some extent, in the bile. 
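The contrast between oral and depot elimination above can be made concrete with first-order decay: the fraction of a dose remaining after time t is 0.5 raised to (t / t½). The sketch below uses the half-lives stated in the text (oral fluphenazine about 20 hours; the decanoate ester nominally 7 to 10 days, here taken at an assumed midpoint of 8 days for illustration):

```python
# First-order elimination: fraction of a dose remaining after time t is
#   f = 0.5 ** (t / t_half)
# Half-lives are from the text; the 8-day midpoint for the depot ester
# is an illustrative assumption within the stated 7-10 day range.

def fraction_remaining(t_hours: float, t_half_hours: float) -> float:
    return 0.5 ** (t_hours / t_half_hours)

oral_t_half = 20.0          # hours, oral fluphenazine hydrochloride
depot_t_half = 8.0 * 24.0   # hours, assumed midpoint of 7-10 days

for day in (1, 7, 30):
    t = day * 24.0
    print(f"day {day:>2}: oral fraction {fraction_remaining(t, oral_t_half):.3f}, "
          f"depot fraction {fraction_remaining(t, depot_t_half):.3f}")
```

By the same logic, steady state on repeated dosing is approached over roughly four to five half-lives, which is one reason depot preparations accumulate and wash out over months rather than days.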
Most oxidized metabolites of antipsychotic drugs are biologically inactive, but a few are not (notably, 7-hydroxychlorpromazine, mesoridazine, and several N-demethylated metabolites of phenothiazines as well as 9-hydroxyrisperidone) and may contribute to the biological activity of the parent substance as well as complicate the problem of correlating assays of drug in blood with clinical effects. The less potent antipsychotic drugs may weakly induce their own hepatic metabolism, since concentrations of chlorpromazine and other phenothiazines in blood are lower after several weeks of treatment with the same dosage; it also is possible that alterations of gastrointestinal motility are partially responsible. The fetus, the infant, and the elderly have diminished capacity to metabolize and eliminate antipsychotic agents, but children tend to

metabolize these drugs more rapidly than do adults (Kutcher, 1997). Bioavailability of several antipsychotic agents is somewhat increased by the use of liquid concentrates. Peak serum concentrations of chlorpromazine and other phenothiazines are attained within 2 to 4 hours. Their intramuscular administration avoids much of the first-pass metabolism in the liver (and possibly also the gut) and provides measurable concentrations in plasma within 15 to 30 minutes. Bioavailability of chlorpromazine may be increased up to tenfold with injections, but the clinical dose usually is decreased by three- to fourfold. Gastrointestinal absorption of chlorpromazine is modified unpredictably by food and probably is decreased by antacids. Concurrent administration of anticholinergic antiparkinsonian agents probably does not appreciably diminish the intestinal absorption of neuroleptic agents (Simpson et al., 1980). Chlorpromazine and other antipsychotic agents bind significantly to membranes and to plasma proteins. Typically, more than 85% of the drug in plasma is bound to albumin. Concentrations of some neuroleptics (e.g., haloperidol) in brain can be more than ten times those in the blood (Tsuneizumi et al., 1992), and their apparent volume of distribution may be as high as 20 liters per kilogram. Disappearance of chlorpromazine from plasma includes a rapid distribution phase (half-life about 2 hours) and a slower elimination phase (half-life about 30 hours), but markedly variable values have been reported; the half-life of elimination from human brain is not known but may be determined using modern brain-scanning technologies (Sedvall, 1992). Approximate elimination half-lives of commonly employed antipsychotic agents are provided in Table 20-3. Attempts to correlate plasma concentrations of chlorpromazine or its metabolites with clinical responses have not been successful (see Baldessarini et al., 1988; Cooper et al., 1976).
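The biphasic disappearance of chlorpromazine described above corresponds to a biexponential (two-compartment) decay, C(t) = A·e^(−αt) + B·e^(−βt). The sketch below uses the half-lives from the text (2 hours distribution, 30 hours elimination); the relative weights of the two phases are not given, so an assumed 50/50 split of the initial concentration is used purely for illustration:

```python
import math

# Biexponential plasma decay: C(t)/C0 = a*exp(-alpha*t) + b*exp(-beta*t),
# with rate constants derived from half-lives via k = ln(2) / t_half.
# Half-lives (2 h distribution, 30 h elimination) are from the text;
# the 50/50 split between phases (a = b = 0.5) is an assumption.

def biexponential(t_hours, a=0.5, t_half_dist=2.0, b=0.5, t_half_elim=30.0):
    alpha = math.log(2) / t_half_dist
    beta = math.log(2) / t_half_elim
    return a * math.exp(-alpha * t_hours) + b * math.exp(-beta * t_hours)

for t in (0, 2, 12, 30, 60):
    print(f"t = {t:>2} h: C/C0 = {biexponential(t):.3f}")
```

The early fall is dominated by the fast distribution term, after which the curve flattens onto the slow elimination phase, which is why single plasma samples taken at different times after a dose can suggest very different "half-lives."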
Studies have revealed wide variations (at least tenfold) in plasma concentrations among individuals. Although it appears that plasma concentrations of chlorpromazine below 30 ng/ml are not likely to produce an adequate antipsychotic response and that levels above 750 ng/ml are likely to be associated with unacceptable toxicity (see Rivera-Calimlim and Hershey, 1984), it is not yet possible to state with confidence the concentrations in plasma that are associated with optimal clinical responses. At least 10 or 12 metabolites of chlorpromazine occur in human beings in appreciable quantities (Morselli, 1977). Quantitatively, the most important of these are nor2-chlorpromazine (doubly demethylated), chlorophenothiazine (removal of the entire side chain), methoxy and hydroxy products, and glucuronide conjugates of the hydroxylated compounds. In the urine, 7-hydroxylated and N-dealkylated (nor2) metabolites and their conjugates predominate. Chlorpromazine and other phenothiazines are metabolized extensively through CYP2D6. The pharmacokinetics and metabolism of thioridazine and fluphenazine are similar to those of chlorpromazine, but the strong anticholinergic action of thioridazine on the gut may modify its own absorption. Major metabolites of thioridazine and fluphenazine include N-demethylated, ring-hydroxylated, and S-oxidized products (Neumeyer and Booth, 2001). Concentrations of thioridazine in plasma are relatively high (hundreds of nanograms per milliliter), possibly because of its relative hydrophilicity. Thioridazine is prominently converted to the active product mesoridazine, a drug in its own right, and probably an important contributor to the neuroleptic activity of thioridazine. The biotransformation of the thioxanthenes is similar to that of the phenothiazines except that metabolism to sulfoxides is common and ring-hydroxylated products are uncommon.
Piperazine derivatives of the phenothiazines and thioxanthenes also are handled much like chlorpromazine, although metabolism of the piperidine ring itself occurs. Elimination of haloperidol and chemically related agents from human plasma is not a log-linear
function, and the apparent half-life increases with time, with a very prolonged terminal half-life of approximately 1 week (Cohen et al., 1992). Haloperidol and other butyrophenones are metabolized primarily by an N-dealkylation reaction; the resultant inactive fragments can be conjugated with glucuronic acid. The metabolites of haloperidol are inactive, with the possible exception of a hydroxylated product formed by reduction of the keto moiety that may be reoxidized to haloperidol (Korpi et al., 1983). A potentially neurotoxic derivative of haloperidol, a substituted phenylpiperidine analogous to the parkinsonism-inducing agent 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), has been described and found in nanomolar quantities in postmortem brain tissue of persons who had been treated with haloperidol (Eyles et al., 1997; Castagnoli et al., 1999). Typical plasma concentrations of haloperidol encountered clinically are about 5 to 20 ng/ml; these correspond to 80% to 90% occupancy of D2-dopamine receptors in human basal ganglia, as demonstrated by PET brain scanning (Baldessarini et al., 1988; Wolkin et al., 1989).

Typical peak serum concentrations of clozapine after a single oral dose of 200 mg (100 to 770 ng/ml) are reached 2.5 hours after administration, and typical serum levels during treatment are about 300 to 500 ng/ml. Clozapine is metabolized preferentially by CYP3A4 into pharmacologically inactive demethylated, hydroxylated, and N-oxide derivatives before excretion in urine and feces. The elimination half-life of clozapine varies with dose and dosing frequency but averages about 12 hours (see Table 20-3).

Risperidone is well absorbed, and it is metabolized in the liver preferentially by CYP2D6 to a major and active circulating metabolite, 9-hydroxyrisperidone. Since this metabolite and risperidone are nearly equipotent, the clinical efficacy of the drug reflects both compounds.
Following oral administration of risperidone, peak plasma concentrations of risperidone and of its 9-hydroxy metabolite occur at 1 and 3 hours, respectively. The mean half-life of both compounds is about 22 hours (Table 20-3). Olanzapine is also well absorbed, but about 40% of an oral dose is metabolized before reaching the systemic circulation. Plasma concentrations of olanzapine peak at about 6 hours after oral administration, and its elimination half-life ranges from 20 to 54 hours (Table 20-3). The major, readily excreted metabolites of olanzapine are the inactive 10-N-glucuronide and 4'-nor derivatives, formed mainly by the action of CYP1A2, with CYP2D6 as a minor alternative pathway (United States Pharmacopoeia, 2000). Quetiapine fumarate is readily absorbed after oral administration and reaches peak plasma levels after 1.5 hours, with a mean half-life of 6 hours (Table 20-3). It is highly metabolized by hepatic CYP3A4 to inactive and readily excreted sulfoxide and acidic derivatives (United States Pharmacopoeia, 2000).

Tolerance and Physical Dependence

The antipsychotic drugs are not addicting, as the term is defined in Chapter 24: Drug Addiction and Drug Abuse. However, some degree of physical dependence may occur, with malaise and difficulty in sleeping developing several days after their abrupt discontinuation. Tolerance usually develops to the sedative effects of neuroleptics over a period of days or weeks. Tolerance to antipsychotic drugs and cross-tolerance among the agents also are demonstrable in behavioral and biochemical experiments in animals, particularly those directed toward evaluation of the blockade of dopaminergic receptors in the basal ganglia (see Baldessarini and Tarsy, 1979). This form of tolerance may be less prominent in limbic and cortical areas of the forebrain. One correlate of tolerance in forebrain dopaminergic systems is the development of disuse supersensitivity of
those systems, probably mediated by changes in the receptors for the neurotransmitter. This mechanism may underlie the clinical phenomenon of withdrawal-emergent dyskinesias (choreoathetosis on abrupt discontinuation of antipsychotic agents, especially following prolonged use of high doses of potent agents) (Baldessarini et al., 1980). Although cross-tolerance for some effects may occur among neuroleptic drugs, clinical problems occur in making rapid changes from high doses of one type of agent to another; sedation, hypotension, and other autonomic effects or acute extrapyramidal reactions can result. Worsening of the clinical condition that routinely follows discontinuation of maintenance treatment with antipsychotic agents appears to be dependent on the rate of drug discontinuation (Viguera et al., 1997). Clinical worsening of psychotic symptoms is particularly likely after rapid discontinuation of clozapine, and it is difficult to control with alternative antipsychotics (Baldessarini et al., 1997).

Preparations and Dosage

The number of agents with known neuroleptic or antipsychotic effects is large. Table 20-1 summarizes only those that are currently marketed in the United States for the treatment of psychotic disorders. Several available agents are excluded, such as promazine hydrochloride (SPARINE) and reserpine and other rauwolfia alkaloids, which have inferior antipsychotic effects or are no longer commonly used for psychiatric patients. Prochlorperazine (COMPAZINE) has questionable utility as an antipsychotic agent and frequently produces acute extrapyramidal reactions; it is thus not commonly employed in psychiatry, although it is used as an antiemetic. Thiethylperazine (TORECAN), marketed only as an antiemetic, is a potent dopaminergic antagonist with many neuroleptic-like properties; at high doses it may be an efficacious antipsychotic agent (Rotrosen et al., 1978).
Several other thioxanthenes, butyrophenones, diphenylbutylpiperidines, benzamides, and long-acting repository preparations of neuroleptic agents are available in other countries.

Toxic Reactions and Side Effects

Antipsychotic drugs have a high therapeutic index and generally are safe agents. Furthermore, most phenothiazines and haloperidol have a relatively flat dose-response curve and can be used over a wide range of dosages. Although occasional deaths from overdosage have been reported, such deaths are rare if the patient receives medical care and if the overdosage is not complicated by the concurrent ingestion of alcohol or other drugs. Based on animal data, the therapeutic index is lower for thioridazine and chlorpromazine than for the more potent agents (Janssen and Van Bever, 1978). Adult patients have survived doses of chlorpromazine up to 10 grams, and deaths from an overdose of haloperidol alone appear to be unknown, although the neuroleptic malignant syndrome and dystonic reactions that compromise respiration can be lethal.

Side effects often are extensions of the many pharmacological actions of these drugs. The most important are those on the cardiovascular system, the central and autonomic nervous systems, and endocrine functions. Other dangerous effects are seizures, agranulocytosis, cardiac toxicity, and pigmentary degeneration of the retina, all of which are rare (see below). Therapeutic doses of phenothiazines may cause faintness, palpitation, and anticholinergic effects including nasal stuffiness, dry mouth, blurred vision, constipation, and, in males with prostatism, urinary retention. The most common troublesome cardiovascular side effect is orthostatic hypotension, which may result in syncope and falls. A fall in blood pressure is most likely to occur from administration of the phenothiazines with aliphatic side chains and of the atypical
antipsychotics. Potent neuroleptic agents generally produce less hypotension.

Neurological Side Effects

A variety of neurological syndromes, involving particularly the extrapyramidal motor system, occur following the use of almost all antipsychotic drugs. These reactions are particularly prominent during treatment with the high-potency agents (tricyclic piperazines and butyrophenones). There is less likelihood of acute extrapyramidal side effects with clozapine, quetiapine, olanzapine, thioridazine, or low doses of risperidone. The neurological effects associated with antipsychotic drugs have been reviewed in detail (Baldessarini and Tarsy, 1979; Baldessarini et al., 1980; Baldessarini, 1984; Baldessarini et al., 1990; Kane et al., 1992; Tarsy et al., 2001).

Six varieties of neurological syndromes are characteristic of antipsychotic drugs. Four of these (acute dystonia, akathisia, parkinsonism, and the rare neuroleptic malignant syndrome) usually appear soon after administration of the drug, and two (rare perioral tremor and tardive dyskinesias or dystonias) are late-appearing syndromes that evolve during prolonged treatment. The clinical features of these syndromes and guidelines for their management are summarized in Table 20-4.

Acute dystonic reactions commonly occur with the initiation of antipsychotic drug therapy, particularly with agents of high potency, and may present as facial grimacing, torticollis, or oculogyric crisis. These syndromes may be mistaken for hysterical reactions or seizures, but they respond dramatically to parenteral administration of anticholinergic antiparkinsonian drugs. Oral administration of anticholinergic agents also can prevent dystonia, particularly in young male patients who have been given a high-potency neuroleptic drug (Arana et al., 1988).
Although treated readily, acute dystonic reactions are terrifying to patients; sudden death has occurred in rare instances, perhaps due to impaired respiration caused by dystonia of the pharyngeal, laryngeal, and other muscles.

Akathisia refers to strong subjective feelings of distress or discomfort, often referred to the legs, and a compelling need to be in constant movement rather than to follow any specific movement pattern. Patients feel that they must get up and walk or continuously move about and may be unable to keep this tendency under control. Akathisia often is mistaken for agitation in psychotic patients; the distinction is critical, since agitation might be treated with an increase in dosage. Because the response of akathisia to antiparkinsonian drugs frequently is unsatisfactory, treatment typically requires reduction of antipsychotic drug dosage or a change of drug. Antianxiety agents or moderate doses of propranolol may be beneficial (Lipinski et al., 1984). This common syndrome often interferes with the acceptance of neuroleptic treatment but frequently is not diagnosed.

A parkinsonian syndrome that may be indistinguishable from idiopathic parkinsonism commonly develops gradually during administration of antipsychotic drugs. Its incidence varies with different agents (see Tables 20-1 and 20-4). Clinically, there is a generalized slowing of volitional movement (akinesia) with mask-like facies and a reduction in arm movements. The syndrome characteristically evolves gradually over days to weeks. The most noticeable signs are slowing of movements and sometimes rigidity and variable tremor at rest, especially involving the upper extremities. "Pill-rolling" movements may be seen, although they are not as prominent in neuroleptic-induced as in idiopathic parkinsonism. Parkinsonian side effects may be mistaken for depression, since the flat facial expression and retarded movements may resemble signs of depression.
This reaction usually is managed by use of either antiparkinsonian agents with anticholinergic properties or amantadine (see Chapter 22: Treatment of Central Nervous System Degenerative Disorders); the use of levodopa or a directly acting dopamine agonist incurs the risk of inducing agitation and worsening the
psychotic illness. Antipsychotic agents sometimes are required in the clinical management of patients with idiopathic Parkinson's disease with spontaneous psychotic illness or psychotic reactions to dopaminergic therapy (see Chapter 22: Treatment of Central Nervous System Degenerative Disorders); clozapine and perhaps quetiapine are least likely to worsen the neurological disorder itself (Menza et al., 1999; Parkinson Study Group, 1999).

A rare disorder, neuroleptic malignant syndrome, resembles a very severe form of parkinsonism with coarse tremor and catatonia, fluctuating in intensity, as well as signs of autonomic instability (labile pulse and blood pressure, hyperthermia), stupor, elevation of creatine kinase in serum, and sometimes myoglobinemia. In its most severe form, this syndrome may persist for more than a week after stopping the offending agent. Because mortality is high (more than 10%), immediate medical attention is required. This reaction has been associated with various types of neuroleptics, but its prevalence may be greater when relatively high doses of the more potent agents are used, especially when they are administered parenterally. Aside from immediate cessation of neuroleptic treatment and provision of supportive care, specific treatment is unsatisfactory; administration of dantrolene or the dopaminergic agonist bromocriptine may be helpful (Addonizio et al., 1987; Pearlman, 1986). Although dantrolene also is used to manage the syndrome of malignant hyperthermia induced by general anesthetics, the neuroleptic-induced form of catatonia and hyperthermia probably is not associated with a defect in Ca2+ metabolism in skeletal muscle.

A rare movement disorder that can appear late in the treatment of chronically ill patients with antipsychotic agents is perioral tremor, often referred to as the "rabbit syndrome" (Jus et al., 1974) because of the peculiar movements that characterize this condition.
Although perioral tremor sometimes is categorized with other tardive (late or slowly evolving) dyskinesias, the latter term usually is reserved for choreoathetotic or dystonic reactions that develop after prolonged therapy. The rabbit syndrome, in fact, shares many features with parkinsonism: the tremor has a frequency of about 5 to 7 Hz, and there is a favorable response to anticholinergic agents and to removal of the offending agent.

Tardive dyskinesia is a late-appearing neurological syndrome (or syndromes) associated with the use of neuroleptic drugs. It occurs more frequently in older patients, and risk may be greater in patients with mood disorders than in those with schizophrenia. Prevalence averages 15% to 25% in chronically psychotic young adults, with an annual incidence of 3% to 5% and a somewhat smaller annual rate of spontaneous remission, even with continued neuroleptic treatment. The risk is much lower with clozapine, but that of other recently developed atypical antipsychotic agents is not established (Tarsy et al., 2001). Tardive dyskinesia is characterized by stereotyped, repetitive, painless, involuntary, quick choreiform (tic-like) movements of the face, eyelids (blinks or spasm), mouth (grimaces), tongue, extremities, or trunk. There are varying degrees of slower athetosis (twisting movements) and sustained dystonic postures, which are more common in young men and may be disabling. Late (tardive) emergence of possibly related disorders marked mainly by dystonia or akathisia (restlessness) also is seen. These movements all disappear in sleep (as in many other extrapyramidal syndromes), vary in intensity over time, and are dependent on the level of arousal or emotional distress. Tardive dyskinetic movements can be suppressed partially by use of a potent neuroleptic, and perhaps with a dopamine-depleting agent such as reserpine or tetrabenazine, but such interventions are reserved for compellingly severe dyskinesia, particularly with continuing psychosis.
Some dyskinetic patients, typically those with dystonic features, may benefit from use of clozapine, with which the risk of tardive dyskinesia is very low. Symptoms sometimes persist indefinitely after discontinuation of neuroleptic medication; more often, they diminish or disappear gradually over months of follow-up and are most likely to resolve spontaneously in younger patients (Gardos et al.,
1994; Morgenstern and Glazer, 1993; Smith and Baldessarini, 1980). Antiparkinsonism agents typically have little effect on, or may exacerbate, tardive dyskinesia and other forms of choreoathetosis, such as in Huntington's disease; no adequate treatment of these conditions has yet been established (Adler et al., 1999; Soares and McGrath, 1999).

There is no established neuropathology in tardive dyskinesia, and its pathophysiological basis remains obscure. It has been hypothesized that compensatory increases in the function of dopamine as a neurotransmitter in the basal ganglia may be involved, including increased abundance and sensitivity of dopamine D2-like receptors resulting from long-term administration of neuroleptic drugs (Baldessarini and Tarsy, 1979; Tarazi et al., 1997). This idea is supported by the dissimilarities of therapeutic responses in patients with Parkinson's disease and those with tardive dyskinesia and by the similarities in responses of patients with other choreoathetotic dyskinesias such as Huntington's disease (see Chapter 22: Treatment of Central Nervous System Degenerative Disorders). Thus, antidopaminergic drugs tend to suppress the manifestations of tardive dyskinesia or Huntington's disease, while dopaminergic agonists worsen these conditions; in contrast to parkinsonism, antimuscarinic agents tend to worsen tardive dyskinesia, but cholinergic agents usually are ineffective. Because supersensitivity to dopaminergic agonists tends not to persist for more than a few weeks after stopping exposure to antagonists of the transmitter, this phenomenon is most likely to play a role in variants of tardive dyskinesia that resolve rapidly; these usually are referred to as withdrawal-emergent dyskinesias. The theoretical and clinical aspects of this problem have been reviewed in detail elsewhere (Baldessarini and Tarsy, 1979; Baldessarini et al., 1980; Kane et al., 1992).
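The epidemiological figures cited earlier for tardive dyskinesia (annual incidence of 3% to 5%) imply a cumulative exposure risk that can be sketched with a constant-hazard model. This simplification ignores the spontaneous-remission rate the text also mentions, so it tends to overestimate observed prevalence.

```python
def cumulative_risk(annual_incidence, years):
    """Cumulative probability of developing tardive dyskinesia under a
    constant annual incidence: 1 - (1 - p)**n. A simplification that
    ignores remission and competing risks."""
    return 1.0 - (1.0 - annual_incidence) ** years
```

With a 4% annual incidence, for example, five years of treatment gives a cumulative risk near 18%, roughly consistent with the 15% to 25% prevalence quoted for chronically treated patients once remission is allowed for.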
It is important to prevent the neurological syndromes that complicate the use of antipsychotic drugs. Certain therapeutic guidelines should be followed. Routine use of antiparkinsonian agents in an attempt to avoid early extrapyramidal reactions usually is unnecessary and adds complexity, side effects, and expense to the treatment regimen. Antiparkinsonian agents are best reserved for cases of overt extrapyramidal reactions that respond favorably to such intervention. The need for such agents for the treatment of acute dystonic reactions ordinarily diminishes with time, but parkinsonism and akathisia tend to persist. The thoughtful and conservative use of antipsychotic drugs in patients with chronic or frequently recurrent psychotic disorders almost certainly can reduce the risk of tardive dyskinesia. Although reduction of the dose of an antipsychotic agent is the best way to minimize its neurological side effects, this may not be practical in a patient with uncontrollable psychotic illness. The best preventive practice is to use the minimum effective dose of an antipsychotic drug for long-term therapy and to discontinue treatment as soon as it seems reasonable to do so or if a satisfactory response cannot be obtained. The use of clozapine, quetiapine, and other novel antipsychotic agents with a low risk of inducing extrapyramidal side effects represents an alternative for some patients, particularly those with continuing psychotic symptoms plus dyskinesia (Baldessarini and Frankenburg, 1991).

Jaundice

Jaundice was observed in patients shortly after the introduction of chlorpromazine. Commonly occurring during the second to fourth week of therapy, the jaundice generally is mild, and pruritus is rare. The reaction is probably a manifestation of hypersensitivity, because eosinophilic infiltration of the liver as well as eosinophilia occur, and there is no correlation with dose.
Desensitization to chlorpromazine may occur with repeated administration, and jaundice may or may not recur if the same neuroleptic agent is given again. When the psychiatric disorder calls for uninterrupted drug therapy for a patient with neuroleptic-induced jaundice, it probably is safest to use low doses of a potent, dissimilar agent.

Blood Dyscrasias

Mild leukocytosis, leukopenia, and eosinophilia occasionally occur with antipsychotic medications, particularly with clozapine and less often with low-potency phenothiazines. It is difficult to determine whether a leukopenia occurring during the administration of a phenothiazine is a forewarning of impending agranulocytosis. This serious but rare complication occurs in not more than 1 in 10,000 patients receiving chlorpromazine or other low-potency agents other than clozapine; it usually appears within the first 8 to 12 weeks of treatment (Alvir et al., 1993). Suppression of the bone marrow or, less commonly, agranulocytosis has been associated particularly with the use of clozapine; the incidence approaches 1% within several months of treatment, independent of dose, and close monitoring of the patient is required for safe use. Because the onset of blood dyscrasia may be sudden, the appearance of fever, malaise, or apparent respiratory infection in a patient being treated with an antipsychotic drug should be followed immediately by a complete blood count. Risk of agranulocytosis has been greatly reduced, though not eliminated, by frequent white blood cell counts in patients being treated with clozapine.

Other Metabolic Effects

Weight gain and its long-term complications commonly are associated with long-term treatment with most antipsychotic and antimanic drugs. Among newer antipsychotic agents, weight gain is especially prominent with clozapine and olanzapine and less so with risperidone and quetiapine (Allison et al., 1999). Associated adverse responses include new-onset or worsening type 2 diabetes mellitus, hypertension, and hyperlipidemia. The anticipated long-term public health impact of these emerging problems is not yet well defined (Gaulin et al., 1999; Wirshing et al., 1998). In some patients with morbid increases in weight, the airway may be compromised, especially during sleep.
Skin Reactions

Dermatological reactions to the phenothiazines are common. Urticaria or dermatitis occurs in about 5% of patients receiving chlorpromazine. Several types of skin disorders may occur. Hypersensitivity reactions that may be urticarial, maculopapular, petechial, or edematous usually occur between the first and eighth weeks of treatment. The skin clears after discontinuation of the drug and may remain so even if drug therapy is reinstituted. Contact dermatitis may occur in personnel who handle chlorpromazine, and there may be a degree of cross-sensitivity to the other phenothiazines. Photosensitivity occurs that resembles severe sunburn. An effective sunscreen preparation should be prescribed for outpatients being treated with phenothiazines during the summer. Gray-blue pigmentation induced by long-term administration of low-potency phenothiazines in high doses is rare with current practices. Epithelial keratopathy often is observed in patients on long-term therapy with chlorpromazine, and opacities in the cornea and in the lens of the eye also have been noted. The deposits tend to disappear spontaneously, although slowly, following discontinuation of drug administration. Pigmentary retinopathy has been reported, particularly following doses of thioridazine in excess of 1000 mg per day; a maximum daily dose of 800 mg currently is recommended.

Interactions with Other Drugs

The phenothiazines and thioxanthenes, especially those of low potency, affect the actions of a number of other drugs, sometimes with important clinical consequences (see DeVane and Nemeroff,
2000; Goff and Baldessarini, 1993). Chlorpromazine originally was introduced to potentiate central depressants in anesthesiology. Antipsychotic drugs can strongly potentiate sedatives and analgesics prescribed for medical purposes, as well as alcohol, nonprescription sedatives and hypnotics, antihistamines, and cold remedies. Chlorpromazine increases the miotic and sedative effects of morphine and may increase its analgesic actions. Furthermore, the drug markedly increases the respiratory depression produced by meperidine and can be expected to have similar effects when administered concurrently with other opioids. Obviously, neuroleptic drugs inhibit the actions of dopaminergic agonists and of levodopa.

Other interactive effects can be manifest on the cardiovascular system. Chlorpromazine and some other antipsychotic drugs, as well as their N-demethylated metabolites, may block the antihypertensive effects of guanethidine, probably by blocking its uptake into sympathetic nerves. The more potent antipsychotic agents, as well as molindone, are less likely to cause this effect. Low-potency phenothiazines can promote postural hypotension, possibly due to their α-adrenergic blocking properties. Thus, the interaction between phenothiazines and antihypertensive agents can be unpredictable. Thioridazine, pimozide, and the experimental agents sertindole and ziprasidone can exert quinidine-like cardiac depressant effects, which can cause myocardial depression, decreased efficiency of repolarization, and increased risk of tachyarrhythmias. These effects may partially nullify the inotropic effect of digitalis. The antimuscarinic action of clozapine and thioridazine can cause tachycardia and enhance the peripheral and central effects (confusion, delirium) of other anticholinergic agents, such as the tricyclic antidepressants and antiparkinsonian agents.
Sedatives or anticonvulsants (e.g., carbamazepine, phenobarbital, and phenytoin, but not valproate) that induce microsomal drug-metabolizing enzymes can enhance the metabolism of antipsychotic agents, sometimes with significant clinical consequences. Conversely, serotonin-reuptake inhibitors including fluoxetine (see Chapter 19: Drugs and the Treatment of Psychiatric Disorders: Depression and Anxiety Disorders) compete for hepatic oxidases and can elevate circulating levels of neuroleptics (Goff and Baldessarini, 1993).

Drug Treatment of Psychoses

The antipsychotic drugs are not specific for the type of psychosis to be treated. They are clearly effective in acute psychoses of unknown etiology, including mania, acute idiopathic psychoses, and acute exacerbations of schizophrenia; the greatest amount of controlled clinical data exists for the acute and chronic phases of schizophrenia and for acute mania. In addition, antipsychotic drugs are used empirically in many other neuromedical and idiopathic disorders in which psychotic symptoms and severe agitation are prominent.

The fact that neuroleptic agents are indeed antipsychotic was slow to gain acceptance. However, many clinical trials and five decades of clinical experience have established that these agents are effective and superior to sedatives such as the barbiturates and benzodiazepines, or to alternatives such as electroconvulsive shock or other medical or psychological therapies (see Baldessarini, 1984, 1985). The "target" symptoms for which antipsychotic agents seem to be especially effective include agitation, combativeness, hostility, hallucinations, acute delusions, insomnia, anorexia, poor self-care, negativism, and sometimes withdrawal and seclusiveness; more variable or delayed are improvements in motivation and cognitive functions, including insight, judgment, memory, and orientation.
The most favorable prognosis is for patients with acute illnesses of brief duration who had functioned relatively well prior to the illness.

Despite the great success of the antipsychotic drugs, their use alone does not constitute optimal care of psychotic patients. The acute care, protection, and support of acutely psychotic patients, as well as mastery of techniques employed in their long-term care and rehabilitation, also are of critical importance. Detailed reviews of the clinical use of antipsychotic drugs are available (Baldessarini, 1984; Marder, 1998). No one drug or combination of drugs has a selective effect on a particular symptom complex in groups of psychotic patients; although individual patients may appear to do better with one agent than another, this can be determined only by trial and error. Certain agents (particularly newer antipsychotic drugs) have been claimed to be specifically effective against "negative" symptoms in psychotic disorders (abulia, social withdrawal, lack of motivation), but evidence supporting this proposal remains inconsistent, and such benefits usually are limited (Moller, 1999). Generally, "positive" (irrational thinking, delusions, agitated turmoil, hallucinations) and negative symptoms tend to respond or not respond together. This trend is well documented with typical neuroleptics as well as modern atypical antipsychotic agents. It is clear that clozapine and other modern atypical antipsychotics induce less bradykinesia and other parkinsonian effects than do typical neuroleptics. Minimizing such side effects is sometimes interpreted clinically as a beneficial effect on impoverished affective responsiveness. It is important to simplify the treatment regimen and to ensure that the patient is receiving the drug. In cases of suspected severe and dangerous noncompliance or with failure of oral treatment, the patient can be treated with injections of fluphenazine decanoate, haloperidol decanoate, or other long-acting preparations. Injectable and long-acting preparations of modern atypical antipsychotics currently are unavailable, but some are in development. 
Because the choice of an antipsychotic drug cannot be made reliably on the basis of anticipated therapeutic effect, drug selection often depends on side effects or on a previous favorable response. If the patient has a history of cardiovascular disease or stroke and the threat from hypotension is serious, a potent neuroleptic should be used in the smallest dose that is effective (see Table 20-1; DeBattista and Schatzberg, 1999). If it seems important to minimize the risk of acute extrapyramidal symptoms, quetiapine, a low dose of olanzapine or risperidone, or clozapine should be considered. If the patient would be seriously discomforted by interference with ejaculation or if there are serious risks of cardiovascular or other autonomic toxicity, low doses of a potent neuroleptic might be preferred. If sedative effects are undesirable, a potent agent is preferable. Small doses of antipsychotic drugs of high or moderate potency may be safest in the elderly. If the patient has compromised hepatic function or if there is a potential threat of jaundice, low doses of a high-potency agent may be used. The physician's experience with a particular drug may outweigh other considerations.

Skill in the use of antipsychotic drugs depends on selection of an adequate but not excessive dose, knowledge of what to expect, and judgment as to when to stop therapy or change drugs. Some patients do not respond satisfactorily to antipsychotic drug treatment, and many chronically ill schizophrenic patients, while helped during periods of acute exacerbation of their disease, may show unsatisfactory responses between the more acute phases of illness. Individual nonresponders cannot be identified beforehand with certainty, and a minority of patients do poorly or sometimes even become worse on medication. If a patient does not improve after a course of seemingly adequate treatment and fails to respond to another drug given in adequate dosage, the diagnosis should be reevaluated.
Usually 2 to 3 weeks or more are required to demonstrate obvious positive effects in schizophrenic patients. Maximum benefit in chronically ill patients may require several months. In contrast,
improvement of some acutely psychotic patients can be seen within 48 hours. Aggressive dosing or parenteral administration of an antipsychotic drug at the start of an acute psychosis has not been found to increase the magnitude or the rate of appearance of therapeutic responses (Baldessarini et al., 1988). Sedatives, such as the potent benzodiazepines, can be used for brief periods during the initiation of antipsychotic therapy but are not effective in the long-term treatment of chronically psychotic and, especially, schizophrenic patients. After the initial response, drugs usually are used in conjunction with psychological, supportive, and rehabilitative treatments. There is no convincing evidence that combinations of antipsychotic drugs offer consistent advantages. A combination of an antipsychotic drug and an antidepressant may be useful in some cases, especially in depressed psychotic patients or in cases of agitated major depression with psychotic features. However, antidepressants and stimulants are unlikely to reduce apathy and withdrawal in schizophrenia, and they may induce clinical worsening in some cases.

Optimal dosage of antipsychotic drugs requires individualization to determine doses that are effective, well tolerated, and accepted by the patient. Dose-response relationships for antipsychotic effects and side effects overlap, and an end point of a desired therapeutic response can be difficult to determine (DeBattista and Schatzberg, 1999). Typical effective doses are approximately 300 to 500 mg of chlorpromazine, 5 to 15 mg of haloperidol, or their equivalent, daily. Doses of as little as 50 to 200 mg of chlorpromazine per day (or 2 to 6 mg of haloperidol or fluphenazine per day) may be effective and be better tolerated by many patients, especially after the initial improvement of acute symptoms (Baldessarini et al., 1988). Careful observation of the patient's changing response is the best guide to dosage.
In the treatment of acute psychoses, the dose of antipsychotic drug is increased during the first few days to achieve control of symptoms. The dose is then adjusted during the next several weeks as the patient's condition warrants. Parenteral medication sometimes is indicated for acutely agitated patients; 5 mg of haloperidol or fluphenazine or a comparable dose of another agent is given intramuscularly. The desired response usually can be obtained by administering additional doses at intervals of 4 to 8 hours for the first 24 to 72 hours, because the appearance of effects may be delayed for several hours. Rarely is it necessary to administer more than 20 to 30 mg of fluphenazine or haloperidol (or an equivalent amount of another agent) per 24 hours. Severe and otherwise poorly controlled agitation usually can be managed safely by use of adjunctive sedation (e.g., with a benzodiazepine such as lorazepam) and close supervision in a secure setting. One must remain alert for acute dystonic reactions, which are especially likely early in the aggressive use of potent neuroleptics. Hypotension is most likely to occur if an agent of low potency, such as chlorpromazine, is given in a high dose or by injection and may occur with atypical antipsychotic agents early in treatment. Some antipsychotic drugs, including fluphenazine, other piperazines, and haloperidol, have been given in doses of several hundred milligrams a day without disaster, although such high doses of potent agents do not yield significantly or consistently superior results in the treatment of acute or chronic psychosis, and they may yield inferior antipsychotic effects as well as increasing risks of neurological and other side effects (Baldessarini et al., 1988). 
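The parenteral dosing pattern just described (5-mg doses of haloperidol or fluphenazine at 4- to 8-hour intervals, rarely exceeding 20 to 30 mg per 24 hours) amounts to a simple arithmetic constraint. The sketch below encodes it for illustration only; the function name and arguments are invented here, and this is not clinical guidance:

```python
# Illustrative sketch of the parenteral dosing ceiling described above:
# repeated 5-mg doses, with the total in any 24-hour window rarely
# exceeding 20-30 mg. Names are invented; not clinical guidance.

def within_24h_limit(dose_times_h, dose_mg=5.0, limit_mg=30.0):
    """Check that no sliding 24-hour window exceeds the quoted ceiling."""
    for t in dose_times_h:
        window = [u for u in dose_times_h if t <= u < t + 24.0]
        if len(window) * dose_mg > limit_mg:
            return False
    return True

# Doses every 6 hours -> 4 x 5 mg = 20 mg per 24 h, inside the limit.
print(within_24h_limit([0, 6, 12, 18, 24, 30]))               # True
# Doses every 3 hours -> 8 x 5 mg = 40 mg per 24 h, beyond the ceiling.
print(within_24h_limit([0, 3, 6, 9, 12, 15, 18, 21]))         # False
```

The sliding-window check mirrors the "per 24 hours" phrasing in the text rather than a fixed calendar day.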
After an initial period of stabilization, regimens based on a single daily dose (typically 5 to 10 mg per day of haloperidol or fluphenazine, 2 to 4 mg of risperidone, 5 to 15 mg of olanzapine, or their equivalent) often are effective and safe; such dosing may allow some degree of selection of the time at which unwanted effects occur so as to minimize the patient's discomfort. Table 20-1 gives the usual and extreme ranges of dosage for antipsychotic drugs used in the United States (see also DeBattista and Schatzberg, 1999). The ranges have been established for the most part in the treatment of schizophrenic or manic patients. Although acutely disturbed inpatients often
require higher doses of an antipsychotic drug than do more stable outpatients, the concept that a low or flexible maintenance dose will suffice during follow-up care of a partially recovered or chronic psychotic patient is supported by several appropriately controlled trials (Baldessarini et al., 1988; Herz et al., 1991). In reviews of nearly 30 controlled prospective studies involving several thousand schizophrenic patients, the mean overall relapse rate was 58% for those patients who were withdrawn from antipsychotic drugs and given a placebo, compared with only 16% of those who continued on drug therapy (Baldessarini et al., 1990; Gilbert et al., 1995; Viguera et al., 1997). Dosage in chronic cases often can be lowered to 50 to 200 mg of chlorpromazine (or its equivalent) per day without signs of relapse (Baldessarini et al., 1988), but rapid dose reduction or discontinuation appears to increase risk of exacerbation or relapse (Viguera et al., 1997). Flexible therapy in which dosage is adjusted to changing current requirements can be useful and can reduce the incidence of side effects. Maintenance with injections of the decanoate ester of fluphenazine or haloperidol every 2 to 4 weeks can be very effective (Kane et al., 1983).

The treatment of delirium or dementia is another accepted use of the antipsychotic drugs. They may be administered temporarily while a specific and correctable structural, infectious, metabolic, or toxic cause is vigorously sought. They sometimes are used for prolonged periods when no correctable cause can be found. Once again, there are no drugs of choice or clearly established dosage guidelines for such indications, although agents of high potency are preferred (see Prien, 1973). In patients with acute "brain syndromes" without likelihood of seizures, frequent small doses (e.g., 2 to 6 mg) of haloperidol or another potent antipsychotic may be effective in controlling agitation.
Agents with low potency should be avoided because of their greater tendency to produce sedation, hypotension, and seizures, and those with central anticholinergic effects may worsen confusion and agitation. Most antipsychotics are effective in the treatment of mania and often are used concomitantly with the institution of lithium or anticonvulsant therapy (see below). In fact, it often is impractical to attempt to manage a manic patient with lithium alone during the first week of illness, when antipsychotic or sedative drugs usually are required. Adequate studies of possible long-term preventive effects of antipsychotic drugs in manic-depressive illness have not been conducted. Antipsychotic drugs also may have a limited role in the treatment of severe depression. Controlled studies have demonstrated the efficacy of several antipsychotic drugs in some depressed patients, especially those with striking agitation or psychotic delusions, and addition of an antipsychotic to an antidepressant in psychotic depression may yield results approaching those obtained with ECT (Brotman et al., 1987; Chan et al., 1987). Antipsychotic agents ordinarily are not used for the treatment of anxiety disorders. The status of the drug treatment of childhood psychosis and other behavioral disorders of children is confused by diagnostic inconsistencies and a paucity of controlled studies. Antipsychotics can benefit children with disorders characterized by features that occur in adult psychoses or mania as well as those with Tourette's syndrome. Low doses of the more potent agents usually are preferred in an attempt to avoid interference with daytime activities or performance in school (Kutcher, 1997; Findling et al., 1998). Attention disorder, with or without hyperactivity, responds poorly to antipsychotic agents but often very well to stimulants and some antidepressants (Kutcher, 1997). 
Information on dosages of antipsychotic drugs for children is limited, as is the number of drugs currently approved in the United States for use in preadolescents. The recommended doses of antipsychotic agents for school-aged children with moderate degrees of agitation are lower than those for acutely psychotic children, who may require daily doses similar to those used in adults (Kutcher, 1997; see also Table 20-1).

Most relevant experience is with chlorpromazine, for which the recommended single dose is approximately 0.5 mg/kg of body weight given at intervals of 4 to 6 hours orally or 6 to 8 hours intramuscularly. Suggested dosage limits are 200 mg per day (orally) for preadolescents, 75 mg per day (intramuscularly) for children aged 5 to 12 years or weighing 23 to 45 kg, and 40 mg per day (intramuscularly) for children under 5 years of age or weighing less than 23 kg. Usual single doses for other agents of relatively low potency are thioridazine, 0.25 to 0.5 mg/kg, and chlorprothixene, 0.5 to 1.0 mg/kg, to a total of 100 mg/day (over the age of 6). For neuroleptics of high potency, daily doses are trifluoperazine, 1 to 15 mg (6 to 12 years of age) and 1 to 30 mg (over 12 years of age); fluphenazine, 0.05 to 0.10 mg/kg, up to 10 mg (over 5 years of age); and perphenazine, 0.05 to 0.10 mg/kg, up to 6 mg (over 1 year of age). Haloperidol and pimozide have been used in children, especially for Tourette's syndrome; haloperidol is recommended for use in a dosage of 2 to 16 mg per day in children over 12 years of age.

Poor tolerance of the side effects of the antipsychotic drugs often limits the dosage that can be given to elderly patients. One should proceed cautiously, using small, divided doses of agents with moderate or high potency, with the expectation that elderly patients will require doses that are one-half or less of those needed for young adults (Eastham and Jeste, 1997; Jeste et al., 1999a,b; Zubenko and Sunderland, 2000).

Miscellaneous Medical Uses for Antipsychotic Drugs

Antipsychotic drugs have a variety of uses in addition to the treatment of psychiatric patients. Predominant among these are the treatment of nausea and vomiting, alcoholic hallucinosis, certain neuropsychiatric diseases marked by movement disorders (notably, Tourette's syndrome and Huntington's disease), and occasionally pruritus (for which trimeprazine is recommended) and intractable hiccough.
Nausea and Vomiting

Many antipsychotic agents can prevent vomiting due to specific etiologies when given in relatively low, nonsedative doses. This use is discussed in Chapter 38: Prokinetic Agents, Antiemetics, and Agents Used in Irritable Bowel Syndrome.

Other Neuropsychiatric Disorders

Antipsychotic drugs are useful in the management of several syndromes with psychiatric features that also are characterized by movement disorders. These include, in particular, Tourette's syndrome (marked by tics, other involuntary movements, aggressive outbursts, grunts, and vocalizations that frequently are obscene) and Huntington's disease (marked by severe and progressive choreoathetosis, psychiatric symptoms, and dementia, with a clear genetic basis). Haloperidol currently is regarded as a drug of choice for these conditions, although it probably is not unique in its antidyskinetic actions. Pimozide, a diphenylbutylpiperidine, also is used (typically in daily doses of 2 to 10 mg). Pimozide carries some risk of impairing cardiac repolarization, and it should be discontinued if the QT interval exceeds 470 msec, especially in a child. Clonidine and certain antidepressants also may be effective in Tourette's syndrome (Spencer et al., 1993). Clozapine and quetiapine are relatively well tolerated in psychosis arising with dopamine-receptor agonist treatment in Parkinson's disease (Tarsy et al., 2001).

Withdrawal Syndromes

Antipsychotic drugs are not useful in the management of withdrawal from opioids, and their use in
the management of withdrawal from barbiturates and other sedatives or alcohol is contraindicated because of the high risk of seizures. They can be used safely and effectively in psychoses associated with chronic alcoholism, especially the syndrome known as alcoholic hallucinosis (see Sadock and Sadock, 2000).

Treatment of Mania

Antimanic Mood-Stabilizing Agents: Lithium

Lithium carbonate was introduced into psychiatry in 1949 for the treatment of mania (Cade, 1949; see Mitchell et al., 1999). However, it was not used for this purpose in the United States until 1970, in part due to concerns of American physicians about the safety of this treatment following reports of severe intoxication with lithium chloride from its uncontrolled use as a substitute for sodium chloride in patients with cardiac disease. Evidence for both the safety and the efficacy of lithium salts in the treatment of mania and the prevention of recurrent attacks of manic-depressive illness is both abundant and convincing (Davis et al., 1999; Mitchell et al., 1999). In recent years, the limitations and side effects of lithium salts have become increasingly well appreciated, and efforts to find alternative antimanic or mood-stabilizing agents have intensified (see Davis et al., 1999; Goodwin and Jamison, 1990). The most successful alternatives or adjuncts to lithium to date are the anticonvulsants carbamazepine and valproic acid (Post, 2000).

History

Lithium urate is soluble, and lithium salts were used in the nineteenth century as a treatment of gout. Lithium bromide was employed in that era as a sedative (including its use in manic patients) and as a putative anticonvulsant. Thereafter, lithium salts were little used until the late 1940s, when lithium chloride was employed as a salt substitute for cardiac and other chronically ill patients. This ill-advised use led to several reports of severe intoxication and death and to considerable notoriety concerning lithium salts within the medical profession.
Cade, in Australia, while looking for toxic nitrogenous substances in the urine of mental patients for testing in guinea pigs, administered lithium salts to the animals in an attempt to increase the solubility of urates. Lithium carbonate made the animals lethargic, and, in an inductive leap, Cade gave lithium carbonate to several agitated or manic psychiatric patients as early as 1948 (see Mitchell et al., 1999). In 1949, he reported that this treatment seemed to have a specific effect in mania (Cade, 1949).

Chemistry

Lithium is the lightest of the alkali metals (group Ia); the salts of this monovalent cation share some characteristics with those of Na+ and K+. Li+ is readily assayed in biological fluids by flame-photometric and atomic-absorption spectrophotometric methods, and it can be detected in brain tissue by magnetic resonance spectroscopy (Riedl et al., 1997). Traces of the ion occur normally in animal tissues, but it has no known physiological role. Lithium carbonate and lithium citrate currently are in therapeutic use in the United States.

Pharmacological Properties

Therapeutic concentrations of lithium ion (Li+) have almost no discernible psychotropic effects in normal individuals. It is not a sedative, depressant, or euphoriant, and this characteristic differentiates Li+ from other psychotropic agents. The general biology and pharmacology of Li+ have been reviewed in detail elsewhere (Jefferson et al., 1983). The precise mechanism of action of Li+ as a mood-stabilizing agent remains unknown, although many cellular actions of Li+ have been
characterized (Manji et al., 1999b). An important characteristic of Li+ is that it has a relatively small gradient of distribution across biological membranes, unlike Na+ and K+; although it can replace Na+ in supporting a single action potential in a nerve cell, it is not an adequate "substrate" for the Na+ pump and it cannot, therefore, maintain membrane potentials. It is uncertain whether or not important interactions occur between Li+ (at therapeutic concentrations of about 1 mEq per liter) and the transport of other monovalent or divalent cations by nerve cells.

Central Nervous System

In addition to the possibility of altered distribution of cations in the CNS, much attention has centered on the effects of low concentrations of Li+ on the metabolism of the biogenic monoamines that have been implicated in the pathophysiology of mood disorders as well as on second-messenger and other intracellular molecular mechanisms involved in signal transduction and in cell and gene regulation (Jope, 1999; Lenox and Manji, 1998; Manji et al., 1999a,b). In animal brain tissue, Li+ at concentrations of 1 to 10 mEq per liter inhibits the depolarization-provoked and Ca2+-dependent release of norepinephrine and dopamine, but not serotonin, from nerve terminals (Baldessarini and Vogt, 1988). Li+ may even enhance the release of serotonin, especially in the limbic system, at least transiently (Treiser et al., 1981; Manji et al., 1999a,b; Wang and Friedman, 1989). The ion has limited effects on catecholamine-sensitive adenylyl cyclase activity or on the binding of ligands to monoamine receptors in brain tissue (Manji et al., 1999b; Turkka et al., 1992), although there is some evidence that Li+ can inhibit the effects of receptor-blocking agents that cause supersensitivity in such systems (Bloom et al., 1983).
Li+ can modify some hormonal responses mediated by adenylyl cyclase or phospholipase C in other tissues, including the actions of antidiuretic and thyroid-stimulating hormones on their peripheral target tissues (see Manji et al., 1999b; Urabe et al., 1991). In part, the actions of Li+ may reflect its ability to interfere with the activity of both stimulatory and inhibitory GTP-binding proteins (Gs and Gi) by keeping them in their less active trimer state (Jope, 1999; Manji et al., 1999b). A consistently reported, selective action of Li+ is to inhibit inositol monophosphatase (Berridge et al., 1989) and thus interfere with the phosphatidylinositol pathway (see Figure 20-1). This effect can lead to decreases in cerebral inositol concentrations, which can be detected with magnetic resonance spectroscopy in human brain tissue (Manji et al., 1999a,b). However, the physiological consequences of this effect remain uncertain, including interference with neurotransmission mechanisms that are mediated by the phosphatidylinositol pathway (Lenox and Manji, 1998; Manji et al., 1999b). Lithium treatment also leads to consistent decreases in the functioning of protein kinases in brain tissue, including calcium-activated, phospholipid-dependent protein kinase C (PKC) (Jope, 1999; Lenox and Manji, 1998), particularly subtypes α and ε (Manji et al., 1999b). This effect also is shared with valproic acid (particularly for PKC ε) but not carbamazepine, among other proposed antimanic or mood-stabilizing agents (Manji et al., 1993). In turn, these effects may alter the release of amine neurotransmitters and hormones (Wang and Friedman, 1989; Zatz and Reisine, 1985) as well as the activity of tyrosine hydroxylase (Chen et al., 1998). A major substrate for cerebral PKC is the myristoylated alanine-rich C-kinase substrate protein (MARCKS), which has been implicated in synaptic and neuronal plasticity.
Its expression is reduced by treatment with both Li+ and valproate, but not by carbamazepine or by antipsychotic, antidepressant, or sedative drugs (Watson and Lenox, 1996; Watson et al., 1998). Another important protein kinase that is inhibited by both Li+ and valproate treatment is glycogen synthase kinase-3β (GSK-3β), which is involved in
neuronal and nuclear regulatory processes, including limiting expression of the regulatory protein β-catenin (Chen et al., 1999b; Manji et al., 1999b). Li+ and valproic acid both interact with nuclear regulatory factors that affect gene expression. Such effects include increasing DNA binding of the transcription regulatory factor activator protein-1 (AP-1) as well as altered expression of other transcription regulatory factors, including AML-1 or PEBP-2 (Chen et al., 1999a,c). Finally, treatment with both Li+ and valproate has been associated with increased expression of the regulatory protein B-cell lymphocyte protein-2 (bcl-2), which is associated with protection against neuronal degeneration (Chen et al., 1999c; Manji et al., 1999c). The significance of these several interactions of mood-stabilizing agents with cell-regulatory factors remains to be clarified.

Absorption, Distribution, and Excretion

Li+ is absorbed readily and almost completely from the gastrointestinal tract. Complete absorption occurs in about 8 hours, with peak concentrations in plasma occurring 2 to 4 hours after an oral dose. Slow-release preparations of lithium carbonate provide a slower rate of absorption and thereby minimize early peaks in plasma concentrations of the ion. However, absorption can be variable, and the incidence of lower intestinal tract symptoms may be increased. Li+ initially is distributed in the extracellular fluid and then gradually accumulates in various tissues. The concentration gradient across plasma membranes is much smaller than those for Na+ and K+. The final volume of distribution (0.7 to 0.9 liter per kilogram) approaches that of total body water and is much lower than that of most other psychotropic agents, which are lipophilic and protein bound. Passage through the blood-brain barrier is slow, and when a steady state is achieved, the concentration of Li+ in the cerebrospinal fluid is about 40% to 50% of the concentration in plasma.
The ion does not bind appreciably to plasma proteins. The kinetics of Li+ can be monitored in human brain with magnetic resonance spectroscopy (Plenge et al., 1994). Approximately 95% of a single dose of Li+ is eliminated in the urine. From one- to two-thirds of an acute dose is excreted during a 6- to 12-hour initial phase of excretion, followed by slow excretion over the next 10 to 14 days. The elimination half-life averages 20 to 24 hours. With repeated administration, Li+ excretion increases during the first 5 to 6 days until a steady state is reached between ingestion and excretion. When therapy with Li+ is stopped, there is a rapid phase of renal excretion followed by a slow 10- to 14-day phase. Since 80% of the filtered Li+ is reabsorbed by the proximal renal tubules, clearance of Li+ by the kidney is about 20% of that for creatinine, ranging between 15 and 30 ml per minute. This is somewhat lower in elderly patients (10 to 15 ml per minute). Loading with Na+ produces a small enhancement of Li+ excretion, but Na+ depletion promotes a clinically important degree of retention of Li+. Because of the low therapeutic index for Li+ (as low as 2 or 3), concentrations in plasma or serum are determined to assure safe use of the drug. In the treatment of acutely manic patients, one can postpone treatment with Li+ until some degree of behavioral control and metabolic stability has been attained with antipsychotics, sedatives, or anticonvulsants. The concentration of Li+ in blood usually is measured at a trough of the oscillations that result from repetitive administration, but the peaks can be two or three times higher at steady state. When the peaks are reached, intoxication may result, even when concentrations in morning samples of plasma are in the acceptable range of around 1 mEq per liter. 
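The kinetic figures quoted above fit together arithmetically. The sketch below, assuming a hypothetical 70-kg patient with mid-range values (0.8 L/kg volume of distribution, 25 ml/min clearance), checks that they imply the quoted half-life and the 5- to 6-day approach to steady state under simple first-order kinetics:

```python
import math

# Back-of-envelope check that the lithium kinetic figures above hang
# together. The 70-kg weight and mid-range volume and clearance values
# are assumptions for illustration, not figures from a specific patient.

weight_kg = 70.0
vd_l = 0.8 * weight_kg             # volume of distribution, ~0.7-0.9 L/kg
cl_l_per_h = 25.0 * 60.0 / 1000.0  # renal clearance, ~15-30 ml/min

# First-order elimination: t1/2 = ln(2) * Vd / CL
half_life_h = math.log(2) * vd_l / cl_l_per_h
print(round(half_life_h, 1))       # ~25.9 h, near the quoted 20-24 h average

def fraction_of_steady_state(t_hours, t_half=24.0):
    """Fraction of the eventual steady-state level reached after t_hours
    of regular dosing, assuming simple first-order accumulation."""
    return 1.0 - 0.5 ** (t_hours / t_half)

# After 5 days (~5 half-lives) accumulation is essentially complete,
# matching the quoted 5- to 6-day approach to steady state.
print(round(fraction_of_steady_state(5 * 24), 3))  # 0.969
```

The same accumulation formula explains why a missed or extra day of dosing shifts levels only gradually, while changes in clearance (e.g., Na+ depletion) shift the eventual steady state itself.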
Single daily doses, with relatively large oscillations of the plasma concentration of Li+, may reduce the polyuria sometimes associated with this treatment, but the average reduction is quite small (Baldessarini et al., 1996b; Hetmar et al., 1991). Nevertheless, because of the low margin of safety of Li+ and because of its short half-life during initial
distribution, divided daily doses often are used, and even slow-release formulations usually are given twice daily. Nonetheless, some physicians administer Li+ once per day and achieve good therapeutic responses safely. Although the pharmacokinetics of Li+ vary considerably among subjects, the volume of distribution and clearance are relatively stable in an individual patient. However, a well-established regimen can be complicated by occasional periods of Na+ loss, as may occur with an intercurrent medical illness or with losses or restrictions of fluids and electrolytes; heavy sweating may be an exception due to a preferential secretion of Li+ over Na+ in sweat (Jefferson et al., 1982). Hence, patients taking Li+ should have plasma concentrations checked at least occasionally. Most of the renal tubular reabsorption of Li+ seems to occur in the proximal tubule. Nevertheless, Li+ retention can be increased by any diuretic that leads to depletion of Na+, particularly the thiazides (Siegel et al., 1998). Renal excretion can be increased by administration of osmotic diuretics, acetazolamide, or aminophylline, although this is of little help in the management of Li+ intoxication. Triamterene may increase excretion of Li+, suggesting that some reabsorption of the ion may occur in the distal nephron; however, spironolactone does not increase the excretion of Li+. Some nonsteroidal antiinflammatory agents can facilitate renal proximal tubular resorption of Li+ and thereby increase concentrations in plasma to toxic levels; this interaction appears to be particularly strong with indomethacin; it may occur with ibuprofen and naproxen, and possibly less so with sulindac and aspirin (see Siegel et al., 1998). Also, a potential drug-drug interaction can occur between Li+ and angiotensin-converting enzyme inhibitors (see Chapter 31: Renin and Angiotensin). Less than 1% of ingested Li+ leaves the human body in the feces, and 4% to 5% is secreted in sweat.
Li+ is secreted in saliva in concentrations about twice those in plasma, while its concentration in tears is about equal to that in plasma. Since the ion also is secreted in human milk, women receiving Li+ should not breast-feed infants.

Toxic Reactions and Side Effects

The occurrence of toxicity is related to the serum concentration of Li+ and its rate of rise following administration. Acute intoxication is characterized by vomiting, profuse diarrhea, coarse tremor, ataxia, coma, and convulsions. Symptoms of milder toxicity are most likely to occur at the absorptive peak of Li+ and include nausea, vomiting, abdominal pain, diarrhea, sedation, and fine tremor. The more serious effects involve the nervous system and include mental confusion, hyperreflexia, gross tremor, dysarthria, seizures, and cranial-nerve and focal neurological signs, progressing to coma and death; sometimes, neurological damage may be irreversible. Other toxic effects are cardiac arrhythmias, hypotension, and albuminuria. Side effects including nausea, diarrhea, daytime drowsiness, polyuria, polydipsia, weight gain, fine hand tremor, and dermatological reactions including acne are common even in therapeutic dose ranges (Baldessarini et al., 1996b). Therapy with Li+ is associated initially with a transient increase in the excretion of 17-hydroxycorticosteroids, Na+, K+, and water. This effect usually is not sustained beyond 24 hours. In the subsequent 4 to 5 days, the excretion of K+ becomes normal, Na+ is retained, and in some cases pretibial edema forms. Na+ retention has been associated with increased aldosterone secretion and responds to administration of spironolactone; however, this maneuver incurs the risk of promoting the retention of Li+ and increasing its concentration in plasma. Edema and Na+ retention frequently disappear spontaneously after several days.

A small number of patients treated with Li+ develop a benign, diffuse, nontender thyroid enlargement suggestive of compromised thyroid function. This effect may be associated with previous thyroiditis, particularly in middle-aged women. In patients treated with Li+, thyroid uptake of 131I is increased, plasma protein-bound iodine and free thyroxine tend to be slightly low, and thyroid-stimulating hormone (TSH) secretion may be moderately elevated. These effects appear to result from interference with the iodination of tyrosine and, therefore, the synthesis of thyroxine. However, patients usually remain euthyroid, and obvious hypothyroidism is rare. In patients who do develop goiter, discontinuation of Li+ or treatment with thyroid hormone results in shrinkage of the gland. Adding supplemental triiodothyronine (T3) to bipolar disorder patients with low-normal thyroid hormone levels and continued depression or anergy may be useful clinically, but proposed use of high doses of thyroxine (T4) to control rapid-cycling bipolar disorder is not established as a safe practice (Bauer and Whybrow, 1990; Baumgartner et al., 1994; Lasser and Baldessarini, 1997).

Polydipsia and polyuria occur in patients treated with Li+, occasionally to a disturbing degree. Acquired nephrogenic diabetes insipidus can occur in patients maintained at therapeutic plasma concentrations of the ion (Siegel et al., 1998). Typically, mild polyuria appears early in treatment and then disappears. Late-developing polyuria is an indication to evaluate renal function, lower the dose of Li+, or consider addition of a thiazide diuretic or a K+-sparing agent such as amiloride to counteract the polyuria (Batlle et al., 1985; Kosten and Forrest, 1986). The polyuria disappears with termination of Li+ therapy.
The mechanism of this effect may involve inhibition of the action of antidiuretic hormone (ADH) on renal adenylyl cyclase as reflected in elevated circulating ADH and lack of responsiveness to exogenous antidiuretic peptides (Boton et al., 1987; Siegel et al., 1998). The result is decreased ADH stimulation of renal reabsorption of water. However, Li+ also may act at steps beyond cyclic AMP synthesis to alter renal function. The effect of Li+ on water metabolism is not sufficiently predictable to be therapeutically useful in treatment of the syndrome of inappropriate secretion of ADH. Evidence of chronic inflammatory changes in biopsied renal tissue has been found in a minority of patients given Li+ for prolonged periods. Since progressive, clinically significant impairment of renal function is rare, these are considered incidental findings by most experts; nevertheless, plasma creatinine and urine volume should be monitored during long-term use of Li+ (Boton et al., 1987; Hetmar et al., 1991). Li+ also has a weak action on carbohydrate metabolism that resembles that of insulin. In rats, Li+ causes an increase in skeletal muscle glycogen accompanied by severe depletion of glycogen from the liver. The prolonged use of Li+ causes a benign and reversible depression of the T wave of the ECG, an effect not related to depletion of Na+ or K+. Li+ routinely causes EEG changes characterized by diffuse slowing, widened frequency spectrum, and potentiation with disorganization of background rhythm. Seizures have been reported in nonepileptic patients with plasma concentrations of Li+ in the therapeutic range. Myasthenia gravis may worsen during treatment with Li+ (Neil et al., 1976). A benign, sustained increase in circulating polymorphonuclear leukocytes occurs during the chronic use of Li+ and is reversed within a week after termination of treatment. Allergic reactions such as dermatitis and vasculitis can occur with Li+ administration.
Worsening of acne vulgaris is a common problem, and some patients may experience mild alopecia. In pregnancy, concomitant use of natriuretics and low-Na+ diets can contribute to maternal and neonatal Li+ intoxication, and during postpartum diuresis one can anticipate potentially toxic
retention of Li+ by the mother. The use of Li+ in pregnancy has been associated with neonatal goiter, CNS depression, hypotonia, and cardiac murmur. All of these conditions reverse with time. The use of Li+ in early pregnancy may be associated with an increase in the incidence of cardiovascular anomalies of the newborn, especially Ebstein's malformation (Cohen et al., 1994). The basal risk of Ebstein's anomaly (malformed tricuspid valve, usually with a septal defect) of about 1 per 20,000 live births may rise severalfold, but probably not above 1 per 5000. Moreover, the defect typically is detectable in utero by ultrasonography and often is surgically correctable after birth. In contrast, the antimanic anticonvulsants valproic acid and perhaps carbamazepine have an associated risk of irreversible spina bifida that may exceed 1 per 100 and so do not represent a rational alternative (Viguera et al., 2000). In balancing the risk vs. benefit of using Li+ in pregnancy, it is important to evaluate the risk of untreated manic-depressive disorder and to consider conservative measures, such as deferring intervention until symptoms arise or using a safer treatment, such as a neuroleptic or ECT (see Cohen et al., 1994; Viguera et al., 2000).

Treatment of Lithium Intoxication

There is no specific antidote for Li+ intoxication, and treatment is supportive. Vomiting induced by rapidly rising plasma Li+ may tend to limit absorption, but fatalities have occurred. Care must be taken to assure that the patient is not Na+- and water-depleted. Dialysis is the most effective means of removing the ion from the body and should be considered in severe poisonings, i.e., in patients exhibiting symptoms of toxicity or patients with serum Li+ concentrations greater than 4.0 mEq/l in acute overdoses or greater than 1.5 mEq/l in chronic overdoses.
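The dialysis criteria just quoted reduce to a two-branch threshold rule. The sketch below encodes them for illustration only; the function name and argument structure are invented here, and this is not clinical guidance:

```python
# Illustrative encoding of the dialysis criteria quoted above: symptomatic
# toxicity, or serum Li+ above 4.0 mEq/l (acute overdose) or 1.5 mEq/l
# (chronic overdose). Names are invented; not clinical guidance.

def consider_dialysis(serum_li_meq_per_l, acute_overdose, symptomatic):
    """Return True when the quoted criteria suggest considering dialysis."""
    if symptomatic:
        return True
    threshold = 4.0 if acute_overdose else 1.5
    return serum_li_meq_per_l > threshold

print(consider_dialysis(4.5, acute_overdose=True, symptomatic=False))   # True
print(consider_dialysis(3.0, acute_overdose=True, symptomatic=False))   # False
print(consider_dialysis(2.0, acute_overdose=False, symptomatic=False))  # True
```

Note the asymmetry the thresholds capture: a chronically dosed patient is already tissue-loaded, so a far lower serum level triggers the same concern as a much higher level after a single acute ingestion.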
Interactions with Other Drugs
Interactions between Li+ and diuretics and nonsteroidal antiinflammatory agents have been discussed above (see Siegel et al., 1998). Thiazide diuretics as well as amiloride may correct the nephrogenic diabetes insipidus caused by Li+ (Boton et al., 1987). Retention of Li+ may be limited during administration of the weakly natriuretic agent amiloride as well as the loop diuretic furosemide, which also reduce the risk of the toxic effects of hypokalemia when circulating levels of Li+ are excessive. Furosemide also may interact less with Li+ than do the thiazides. Amiloride and other diuretic agents (sometimes with reduced doses of Li+) have been used safely to reverse the syndrome of diabetes insipidus occasionally associated with Li+ therapy (Batlle et al., 1985; Boton et al., 1987; see Chapter 29: Diuretics). Li+ often is used in conjunction with antipsychotic, sedative, antidepressant, and anticonvulsant drugs. A few case reports have suggested a risk of increased CNS toxicity when Li+ is combined with haloperidol; however, this finding is at variance with many years of experience with this combination. Antipsychotic drugs may prevent nausea, which can be a sign of Li+ toxicity. There is, however, no absolute contraindication to the concurrent use of Li+ and psychotropic drugs. Finally, anticholinergic and other agents that alter gastrointestinal motility also may alter Li+ concentrations in blood over time.

Therapeutic Uses
The use of Li+ in bipolar disorder (manic-depressive illness) is discussed below. Treatment with Li+ ideally is conducted in cooperative patients with normal Na+ intake and normal cardiac and renal function. Occasionally, patients with severe systemic illnesses can be treated with Li+, provided that the indications are sufficiently compelling.
Treatment of acute mania and the prevention of recurrences of mania in otherwise-healthy adults or adolescents currently are the only uses approved by the United States Food and Drug Administration (FDA), even though the primary indication for Li+ treatment is long-term prevention of recurrences of major affective illness, particularly both mania and depression in bipolar I or II disorders (see Baldessarini et al., 1996b;
Goodwin and Jamison, 1990; Shulman et al., 1996; Tondo et al., 1998a). In addition, on the basis of compelling evidence of efficacy, Li+ sometimes is used as an alternative or adjunct to antidepressants in severe recurrent depression, as a supplement to antidepressant treatment in acute major depression, or as an adjunct when a later response to an antidepressant alone is unsatisfactory (see Austin et al., 1991; Bauer and Döpfmer, 1999). These beneficial effects in major depression may be associated with the presence of clinical or biological features also found in bipolar affective disorder (see Goodwin and Jamison, 1990; Baldessarini et al., 1996b). Growing clinical experience also suggests the utility of Li+ in the management of childhood disorders marked by adultlike manic-depression or by severe changes in mood and behavior, which are probable precursors of the better-known bipolar disorders of adults (see Baldessarini et al., 1996b; Faedda et al., 1995). Most preparations currently used in the United States are tablets or capsules of lithium carbonate. Slow-release preparations of lithium carbonate also are available, as is a liquid preparation of lithium citrate (with 8 mEq of Li+, equivalent to 300 mg of the carbonate salt, per 5 ml or 1 teaspoonful of citrate liquid). Salts other than the carbonate have been used, but the carbonate salt is favored for tablets and capsules because it is relatively less hygroscopic and less irritating to the gut than other salts, especially the chloride. Li+ is not prescribed merely by dose; instead, because of its low therapeutic index, determination of the concentration of the ion in blood is crucial. Li+ cannot be used with adequate safety in patients who cannot be tested regularly. Concentrations considered effective and acceptably safe are between 0.60 and 1.25 mEq per liter; the range of 0.9 to 1.1 mEq per liter is favored for the treatment of acutely manic or hypomanic patients.
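The carbonate-to-Li+ equivalence and the serum windows quoted above can be checked with a short sketch; this is a didactic illustration only, not clinical software, and all names are our own (the formula weight of Li2CO3, about 73.9, is the one assumption not stated in the text):

```python
# Lithium carbonate (Li2CO3, formula weight ~73.9) supplies two Li+
# per formula unit, so ~36.95 mg of the salt provides 1 mEq of Li+.
MG_LI2CO3_PER_MEQ = 73.89 / 2.0

def carbonate_mg_to_meq(mg: float) -> float:
    """Convert a lithium carbonate dose in mg to mEq of Li+."""
    return mg / MG_LI2CO3_PER_MEQ

# Serum-concentration windows quoted in the text (mEq per liter),
# for samples drawn 10 to 12 hours after the last dose.
ACCEPTABLE = (0.60, 1.25)    # effective and acceptably safe overall
ACUTE_MANIA = (0.9, 1.1)     # favored for acutely manic patients

def within(window: tuple, level: float) -> bool:
    """True if a serum Li+ level falls inside the given window."""
    lo, hi = window
    return lo <= level <= hi

# A 300-mg carbonate tablet supplies about 8.1 mEq of Li+, matching
# the stated equivalence of 8 mEq per 5 ml of citrate liquid.
```

The arithmetic reproduces the text's figures: 300 mg of the carbonate is roughly 8 mEq of Li+, and a level of 1.0 mEq per liter sits inside the favored acute-mania window.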
Somewhat lower values (0.6 to 0.75 mEq per liter) are considered adequate and safer for long-term use in the prevention of recurrent manic-depressive illness; some patients may not relapse at concentrations as low as 0.5 to 0.6 mEq per liter, and lower levels usually are better tolerated (Maj et al., 1986; Tondo et al., 1998a). These concentrations refer to serum or plasma samples obtained 10 to 12 hours after the last oral dose of the day. The recommended concentration usually is attained with doses of 900 to 1500 mg of lithium carbonate per day in outpatients and 1200 to 2400 mg per day in hospitalized manic patients; the optimal dose tends to be larger in younger and heavier individuals. Serum concentrations of Li+ have been found to follow a clear dose-effect relationship between 0.4 and 0.9 mEq per liter, with a corresponding dose-dependent rise in polyuria and tremor as indices of adverse effects and little gain in benefit at levels above 0.75 mEq per liter (Maj et al., 1986). This pattern underscores the need to individualize serum levels to obtain a favorable risk/benefit relationship. Li+ has been evaluated in many additional disorders marked by an episodic course, including premenstrual dysphoria, episodic alcohol abuse, and episodic violence (see Baldessarini et al., 1996b). Evidence of efficacy in most of these conditions has been unconvincing. The side effects of the Li+ ion have been exploited in the management of hyperthyroidism and the syndrome of inappropriate ADH secretion, as well as in the reversal of spontaneous or drug-induced leukopenias, but usually with limited benefit.

Drug Treatment of Mania
The modern treatment of the manic, depressive, and mixed-mood phases of bipolar disorder was revolutionized by the introduction of lithium in 1949, its gradual acceptance worldwide by the 1960s, and its late official acceptance in the United States in 1970, initially for acute mania only and now primarily for the prevention of recurrences of mania.
Lithium is effective in acute mania but now is seldom employed as the sole treatment because of its slow onset of action and the potential difficulty of safe
management in a highly agitated and uncooperative manic patient. Initially, an antipsychotic or a potent sedative benzodiazepine (such as lorazepam or clonazepam) commonly is used to attain a degree of control of acute agitation (Licht, 1998; Tohen and Zarate, 1998). Alternatively, sodium valproate can bring about rapid antimanic effects (Pope et al., 1991; Bowden et al., 1994), particularly with doses as high as 30 mg/kg and later 20 mg/kg daily, yielding serum concentrations of 90 to 120 µg/ml (Grunze et al., 1999; Hirschfeld et al., 1999). Li+ then can be introduced more safely for longer-term mood stabilization, or the anticonvulsant may be continued alone. Li+ or an alternative antimanic agent usually is continued for at least several months after full recovery from a manic episode because of the high risk of relapse or of cycling into depression within 12 months (see Goodwin and Jamison, 1990). The clinical decision to recommend more prolonged maintenance treatment is based on balancing the frequency and severity of past episodes of manic-depressive illness, the age and estimated reliability of the patient, and the risk of side effects (see Baldessarini et al., 1996b; Zarin and Pass, 1987). Li+ remains by far the most securely established long-term treatment for preventing recurrences of mania and bipolar depression (Baldessarini and Tondo, 2000; Davis et al., 1999; Goodwin and Jamison, 1990). There also is compelling evidence that Li+ substantially lowers the risk of suicide (Tondo and Baldessarini, 2000). The potential clinical utility of Li+ in conditions other than recurrences of mania or depression in bipolar I disorder was considered above. Applications include adjunctive use in patients who present clinically with major depression and have only mild mood elevations or hypomania (bipolar II disorder) and adjunctive use in severe, especially melancholic, apparently nonbipolar recurrent major depression.
Owing to the limited tolerability of Li+ and its imperfect protection against recurrences of bipolar illness, antimanic anticonvulsants, particularly carbamazepine and valproic acid or its sodium salt, are increasingly employed prophylactically in bipolar disorder on an empirical basis. However, the long-term research support for these agents remains limited and inconclusive (Calabrese et al., 1992, 1995; Davis et al., 1999; Bowden et al., 2000; Davis et al., 2000), and there is growing evidence that carbamazepine is inferior to lithium (Dardennes et al., 1995; Davis et al., 1999; Denicoff et al., 1997; Greil et al., 1997; Post et al., 1998; Post, 2000). The relevant pharmacology and dosing guidelines for these agents in the treatment of epilepsy are provided in Chapter 21: Drugs Effective in the Therapy of the Epilepsies. Doses established for anticonvulsant effects are assumed to be appropriate for the treatment of manic-depressive patients, although formal dose-response studies in psychiatric patients are lacking. Thus, dosing usually is adjusted to provide plasma concentrations of 6 to 12 µg/ml for carbamazepine and 60 to 120 µg/ml for valproic acid. It also is common to combine Li+ with an anticonvulsant, particularly valproate, when patients are not fully protected from recurrences of bipolar illness by monotherapy (Freeman and Stoll, 1998). Antipsychotic drugs commonly are employed empirically to manage psychotic features or failures of prophylaxis against mania in manic-depressive illness (Sernyak et al., 1994), and they have short-term antimanic effects (Segal et al., 1998; Tohen and Zarate, 1998; Tohen et al., 1999). However, there is no credible scientific support for the long-term efficacy of these agents in mood disorders, and the risk of tardive dyskinesia in these syndromes may be even higher than in schizophrenia (Kane, 1999).
The empirical use of antimanic-antipsychotic agents in bipolar disorder (particularly in mania and psychosis) is widespread, despite a lack of research demonstrating their long-term benefits. However, the recent availability of atypical antipsychotic agents with a lower risk of tardive dyskinesia and other neurological side effects, clinical experience suggesting a mood-stabilizing action of clozapine, and evidence of antimanic actions of risperidone and olanzapine all suggest that better-tolerated and safer antipsychotic agents now in development should be considered for treatment of bipolar disorder (Tohen et al., 1999; Keck and Licht, 2000).

Other alternatives to lithium and the anticonvulsants have been less well evaluated. Discontinuation of maintenance treatment with Li+ carries a high risk of early recurrences and of suicidal behavior over a period of several months, even when treatment has been successful for several years; recurrence is much more rapid than is predicted by the natural history of untreated bipolar disorder, in which cycle lengths average about one year (Baldessarini et al., 1996b, 1999; Tondo et al., 1998b). This risk probably can be moderated by slow, gradual removal of Li+ when discontinuation is medically feasible (Faedda et al., 1993). Significant risk also is suspected after the rapid discontinuation of, or even sharp dosage reduction during, maintenance treatment with other agents, including antipsychotic, antidepressant, and antianxiety drugs (see Baldessarini et al., 1996b, 1999). This phenomenon affects the design and interpretation of many studies in experimental therapeutics in which an ongoing maintenance treatment is interrupted to compare higher vs. lower doses, an alternative agent, or a placebo.

Prospectus

Novel Treatments for Psychotic Disorders
Acceptance of clozapine for general use stimulated renewed interest in discovering other antipsychotic agents that combine a low risk of extrapyramidal neurological side effects with high efficacy, while lacking the several potentially serious adverse effects of clozapine discussed above (Baldessarini and Frankenburg, 1991). Several benzepine analogs have been introduced and approved by the FDA for clinical use, including olanzapine and quetiapine. These compounds do not induce seizures and lack the hematological toxicity of clozapine, although olanzapine has a higher incidence of motor side effects than does clozapine, as well as a high risk of weight gain and associated metabolic adverse effects.
Other compounds currently in development include several substituted-benzamide analogs of sulpiride, the indole derivatives sertindole and ziprasidone, and the dibenzothiepine analog of clozapine, zotepine (Daniel et al., 1999; Waddington and Casey, 2000). Most of these agents have a complex neuropharmacology resembling that of clozapine, with interactions at several classes of cerebral neurotransmitter receptors. A specific approach stimulated by clozapine is to test agents with antidopaminergic plus other actions, particularly antagonism of central 5-HT2A serotonin receptors. A lead compound of this type is the benzisoxazole risperidone, discussed above. Other compounds with a risperidone-like binding profile and potential antipsychotic efficacy include ziprasidone (Daniel et al., 1999), sertindole (recently removed from clinical trials because of cardiac depressant actions; see Waddington and Casey, 2000), iloperidone (Sainati et al., 1995), and ORG-5222 (Andree et al., 1997). Compounds selective for dopamine receptors other than the D2 subtype have been considered but so far have shown little evidence of antipsychotic activity. Substituted enantiomeric R(+)-benzazepines show high selectivity for D1-dopamine receptors; these include the experimental compounds SKF-83566 and SCH-23390 (Kebabian et al., 1997). A modified, longer-acting tetracyclic analog, ecopipam (SCH-39166), reached clinical trials as a potential atypical antipsychotic agent but lacked evidence of efficacy (Karlsson et al., 1995). The discovery of several gene products that appear to represent new dopamine receptor subtypes also has encouraged a search for agents selective for them.
Agents partially selective for the D3-dopamine receptor include several hydroxyaminotetralins [particularly R(+)-7-hydroxy-N,N-dipropylaminotetralin and the tricyclic analog PD-128,907], hexahydrobenzophenanthridines, nafadotride, BP-897, and others in development (Baldessarini et al., 1993; Watts et al., 1993; Sautel et al., 1995; Kebabian et al., 1997; Pilla et al., 1999). The subtle and atypical functional
activities of cerebral D3 receptors suggest that D3 agonists rather than antagonists may have useful psychotropic effects (Shafer and Levant, 1998; Pilla et al., 1999). D4-dopamine receptors also are of interest because of their very low prevalence in the extrapyramidal basal ganglia and their moderate selectivity for clozapine (Van Tol et al., 1991). Selective D4 antagonists and mixed D4/5-HT2 antagonists so far have proved ineffective in treating the psychotic symptoms of schizophrenia (Kramer et al., 1997; Truffinet et al., 1999). However, D4-selective compounds may emerge as clinically innovative treatments for other neuropsychiatric disorders genetically associated with dopamine D4 receptors, including attention deficit hyperactivity disorder (Tarazi and Baldessarini, 1999). In general, the rate of development of novel antipsychotic agents has again slowed following a burst of innovation that led to several new drugs currently in clinical use or advanced clinical trials. Novel principles are needed, particularly involving targets other than dopamine receptors, which have dominated antipsychotic drug development for a half-century.

Novel Treatments for Bipolar Disorder
The clinical success of valproate and carbamazepine as antimanic agents has strongly encouraged further exploration of other anticonvulsant agents, including older agents such as primidone and agents that may act by enhancing the function of GABA as a key central inhibitory transmitter (Keck and McElroy, 1998; Manji et al., 2000; Post et al., 1998; Post, 2000). A growing number of such compounds are being introduced into neurological practice (see Chapter 21: Drugs Effective in the Therapy of the Epilepsies). Several also are in postmarketing evaluation for potential psychiatric applications, including gabapentin, lamotrigine, oxcarbazepine, tiagabine, and topiramate (see Ferrier and Calabrese, 2000; Post, 2000).
For bipolar disorder, a critical challenge is to develop effective antidepressants that do not induce mania, as well as mood-stabilizing agents that consistently outperform lithium with greater safety (see Baldessarini et al., 1996b; Stoll et al., 1994). Lamotrigine has been found effective in bipolar depression with minimal risk of inducing mania (Calabrese et al., 1999). Because the sedative-anticonvulsant benzodiazepine clonazepam has useful short-term antimanic or sedative effects, it and lorazepam commonly are used adjunctively for the immediate control of manic excitement (Baldessarini et al., 1996b). Whether the anticonvulsant properties of clonazepam actually are greater than those of other potent benzodiazepines, and whether such agents can provide a long-term mood-stabilizing action, remain uncertain (see Chapter 21: Drugs Effective in the Therapy of the Epilepsies; Bradwejn et al., 1990). Despite their theoretical plausibility, central antiadrenergic drugs have not been considered seriously for the treatment of mania, perhaps owing to expectations of excessive sedation or hypotension. In addition to agents acting on central adrenergic receptors, other antihypertensive agents, notably certain lipophilic, highly centrally active L-type Ca2+-channel blockers including nimodipine and other dihydropyridines, deserve further exploration as mood-stabilizing agents (see Dubovsky, 1998; Pazzaglia et al., 1998). Given the several shared actions of lithium and valproate, it may be possible to develop novel antimanic agents that act directly on the effector mechanisms mediating the actions of adrenergic and other neurotransmitter receptors (Manji et al., 1999b). These include drugs that affect protein kinase C, such as the antiestrogen tamoxifen (Bebchuk et al., 2000), as well as other novel kinase-inhibiting agents under experimental development.

Finally, among natural products, long-chain, unsaturated omega-3 fatty acids (including docosahexaenoic and linolenic acids) found in seed oils and particularly concentrated in fish flesh oils may have at least moderate mood-stabilizing effects and seem to be particularly helpful in bipolar depression. At least one controlled trial supports this approach (Stoll et al., 1999). For further discussion of psychiatric disorders, see Chapter 371 in Harrison's Principles of Internal Medicine, 16th ed., McGraw-Hill, New York, 2005.

Chapter 21. Drugs Effective in the Therapy of the Epilepsies


Overview
The epilepsies are common and frequently devastating disorders, affecting approximately 2.5 million people in the United States alone. More than 40 distinct forms of epilepsy have been identified. Epileptic seizures often cause transient impairment of consciousness, leaving the individual at risk of bodily harm and often interfering with education and employment. Therapy is symptomatic in that available drugs inhibit seizures, but neither effective prophylaxis nor cure is available. Compliance with medication is a major problem because of the need for long-term therapy together with the unwanted effects of many drugs. The mechanisms of action of antiseizure drugs fall into three major categories. Drugs effective against the most common forms of epileptic seizures, partial and secondarily generalized tonic-clonic seizures, appear to work by one of two mechanisms. One is to limit the sustained, repetitive firing of a neuron, an effect mediated by promoting the inactivated state of voltage-activated Na+ channels. A second mechanism appears to involve enhanced gamma-aminobutyric acid (GABA)-mediated synaptic inhibition, an effect mediated presynaptically for some drugs and postsynaptically for others. Drugs effective against a less common form of epileptic seizure, the absence seizure, limit activation of a particular voltage-activated Ca2+ channel known as the T current. Although many treatments are available, much effort is being devoted to novel approaches. Many of these approaches center on elucidating the genetic, cellular, and molecular mechanisms of hyperexcitability, insights that promise to provide specific targets for novel therapies.

Terminology and Epileptic Seizure Classification
The term seizure refers to a transient alteration of behavior due to the disordered, synchronous, and rhythmic firing of populations of brain neurons.
The term epilepsy refers to a disorder of brain function characterized by the periodic and unpredictable occurrence of seizures. Seizures can be "nonepileptic" when evoked in a normal brain by treatments such as electroshock or chemical convulsants or "epileptic" when occurring without evident provocation. Pharmacological agents in current clinical use inhibit seizures, and thus are referred to as antiseizure drugs. Whether any of these agents has prophylactic value in preventing development of epilepsy (epileptogenesis) is uncertain. Seizures are thought to arise from the cerebral cortex, and not from other central nervous system (CNS) structures such as thalamus, brainstem, or cerebellum. Epileptic seizures have been classified into partial seizures, those beginning focally in a cortical site, and generalized seizures, those that involve both hemispheres widely from the outset (Commission, 1981). The behavioral

manifestations of a seizure are determined by the functions normally served by the cortical site at which the seizure arises. For example, a seizure involving the motor cortex is associated with clonic jerking of the body part controlled by this region of cortex. A simple partial seizure is associated with preservation of consciousness; a complex partial seizure is associated with impairment of consciousness. The majority of complex partial seizures originate from the temporal lobe. Examples of generalized seizures include absence, myoclonic, and tonic-clonic seizures. The type of epileptic seizure determines the drug selected for therapy. More detailed information is presented in Table 21-1. Apart from this epileptic seizure classification, an additional classification specifies epileptic syndromes, clusters of symptoms that frequently occur together and that include seizure types, etiology, age of onset, and other factors (Commission, 1989). More than 40 distinct epileptic syndromes have been identified. The epileptic syndromes have been categorized into partial versus generalized epilepsies. The partial epilepsies may consist of any of the partial seizure types (Table 21-1) and account for roughly 60% of all epilepsies. The etiology commonly consists of a lesion in some part of the cortex, such as a tumor, a developmental malformation, or damage due to trauma or stroke. Such lesions often are evident on brain imaging studies such as magnetic resonance imaging. Alternatively, the etiology may be genetic. The generalized epilepsies are characterized most commonly by one or more of the generalized seizure types listed in Table 21-1 and account for approximately 40% of all epilepsies. The etiology is usually genetic. The most common generalized epilepsy is referred to as juvenile myoclonic epilepsy, accounting for approximately 10% of all epileptic syndromes.
The age of onset is in the early teens, and the condition is characterized typically by myoclonic, tonic-clonic, and often absence seizures. Like most of the generalized-onset epilepsies, juvenile myoclonic epilepsy is a complex genetic disorder, probably due to inheritance of multiple susceptibility genes; there is a familial clustering of cases, but the pattern of inheritance is not mendelian. To date, the classification of epileptic syndromes has had more of an impact on guiding clinical assessment and management than on the selection of antiseizure drugs.

Nature and Mechanisms of Seizures and Antiseizure Drugs

Partial Epilepsies
More than a century ago, John Hughlings Jackson, the father of modern concepts of epilepsy, proposed that seizures were caused by "occasional, sudden, excessive, rapid and local discharges of gray matter," and that a generalized convulsion resulted when normal brain tissue was invaded by the seizure activity initiated in the abnormal focus. This insightful proposal provided a valuable framework for thinking about mechanisms of partial epilepsy. The advent of the electroencephalogram (EEG) in the 1930s permitted the recording of electrical activity from the scalp of human beings with epilepsy and demonstrated that the epilepsies are disorders of neuronal excitability. The pivotal role of synapses in mediating communication among neurons in the mammalian brain suggested that defective synaptic function might lead to a seizure. That is, a reduction of inhibitory synaptic activity or an enhancement of excitatory synaptic activity might be expected to trigger a seizure; pharmacological studies of seizures supported this notion. The neurotransmitters mediating the bulk of synaptic transmission in the mammalian brain are amino acids, with gamma-aminobutyric acid (GABA) and glutamate being the principal inhibitory and excitatory neurotransmitters, respectively (see Chapter 12: Neurotransmission and the Central Nervous System).
Pharmacological studies disclosed that microinjection of antagonists of the GABAA receptor or of agonists of different glutamate-receptor subtypes (NMDA, AMPA, or kainic acid; see Chapter 12: Neurotransmission and the Central Nervous System) triggers seizures in experimental animals in
vivo. Pharmacological agents that enhance GABA-mediated synaptic inhibition inhibit seizures in diverse models. Glutamate-receptor antagonists also inhibit seizures in diverse models, including seizures evoked by electroshock and by chemical convulsants such as pentylenetetrazol. Such studies support the idea that pharmacological regulation of synaptic function can regulate the propensity for seizures, and they provide a framework for electrophysiological analyses aimed at elucidating the role of both synaptic and nonsynaptic mechanisms in the expression of seizures and epilepsy. Progress in electrophysiological techniques has fostered the progressive refinement of the level of analysis of seizure mechanisms from the EEG to populations of neurons (field potentials), to individual neurons, to individual synapses, and to individual ion channels on individual neurons. Cellular electrophysiological studies of epilepsy over roughly two decades beginning in the mid-1960s focused on elucidating the mechanisms underlying the depolarization shift (DS), the intracellular correlate of the "interictal spike" (Figure 21-1). The interictal (or between-seizures) spike is a sharp waveform recorded in the EEG of patients with epilepsy; it is asymptomatic in that it is accompanied by no detectable change in the patient's behavior. The location of the interictal spike helps localize the brain region from which seizures originate in a given patient. The DS consists of a large depolarization of the neuronal membrane associated with a burst of action potentials. In most cortical neurons, the DS is generated by a large excitatory synaptic current that can be enhanced by activation of voltage-regulated intrinsic membrane currents (for review, see Dichter and Ayala, 1987).
Although the mechanisms generating the DS are increasingly understood, it remains unclear whether the interictal spike triggers a seizure, inhibits a seizure, or is an epiphenomenon with respect to seizure occurrence in an epileptic brain. While these questions remain unanswered, study of the mechanisms of DS generation set the stage for inquiry into the cellular mechanisms of a seizure.

Figure 21-1. Relations among Cortical EEG, Extracellular, and Intracellular Recordings in a Seizure Focus Induced by Local Application of a Convulsant Agent to Mammalian Cortex. The extracellular recording was made through a high-pass filter. Note the high-frequency firing of the neuron, evident in both the extracellular and the intracellular recording, during the paroxysmal depolarization shift (PDS). (Modified from Ayala et al., 1973, with permission.)

During the 1980s, a diversity of in vitro models of seizures was developed in isolated brain slice preparations, in which many synaptic connections are preserved. Electrographic events with features similar to those recorded during seizures in vivo have been produced in hippocampal slices by multiple methods, including altering the ionic constituents of the media bathing the brain slices (for review, see McNamara, 1994), such as low Ca2+, zero Mg2+, or elevated K+. The accessibility and experimental control provided by these preparations have permitted mechanistic investigations. Analyses of multiple in vitro models confirmed the importance of synaptic function in the initiation of a seizure, demonstrating that subtle (e.g., 20%) reductions of inhibitory synaptic function could lead to epileptiform activity and that activation of excitatory synapses could be pivotal in the initiation of a seizure. Many other important factors were identified, including the volume of the extracellular space as well as intrinsic properties of a neuron, such as voltage-regulated ion channels, including those gating K+, Na+, and Ca2+ ions (see Traynelis and Dingledine, 1988). Identification of these diverse synaptic and nonsynaptic factors controlling seizures in vitro provides potentially valuable pharmacological targets for regulating seizure susceptibility in vivo.

An additional line of investigation has centered on understanding the mechanisms by which a normal brain is transformed into an epileptic brain. Some common forms of partial epilepsy arise months to years after injury of the cortex sustained as a consequence of stroke, trauma, or other factors. An effective prophylaxis administered to patients at high risk would be highly desirable. The drugs described in this chapter provide symptomatic therapy; that is, they inhibit seizures in patients with epilepsy. No effective antiepileptogenic agent has been identified. Understanding the mechanisms of epileptogenesis in cellular and molecular terms would provide a framework for the development of novel therapeutic approaches. The availability of animal models provides an opportunity to investigate the underlying mechanisms. One model, termed "kindling," is induced by periodic administration of brief, low-intensity electrical stimulation of the amygdala or other limbic structures. Initial stimulations evoke a brief electrical seizure recorded on the EEG without behavioral change, but repeated (e.g., 10 to 20) stimulations result in progressive intensification of seizures, culminating in tonic-clonic seizures (Goddard et al., 1969). Once established, the enhanced sensitivity to electrical stimulation persists for the life of the animal. Despite the exquisite propensity to intense seizures, spontaneous seizures or a truly epileptic condition do not occur until 100 to 200 stimulations have been administered. The ease of controlling kindling induction (i.e., stimulations administered at the investigator's convenience), its graded onset, and the ease of quantitating epileptogenesis (the number of stimulations required to evoke tonic-clonic seizures) simplify experimental study. Pharmacological studies have demonstrated that interventions limiting activation of the NMDA subtype of glutamate receptor or the trkB subtype of neurotrophin receptor inhibit epileptogenesis in this model.
These pharmacological data provide valuable clues to the cellular and molecular mechanisms underlying epileptogenesis in this model. Two additional models are produced by administration of a convulsive chemical, kainic acid or pilocarpine, resulting in an intense limbic and tonic-clonic status epilepticus lasting hours. In both models, the fleeting episode of status epilepticus is followed weeks later by the onset of spontaneous seizures (Lemos and Cavalheiro, 1995; Longo and Mello, 1998), an intriguing parallel to the scenario of complicated febrile seizures in young children followed by the emergence of spontaneous seizures years later. In contrast to the limited or no neuronal loss characteristic of the kindling model, overt destruction of hippocampal neurons occurs in both the pilocarpine and kainate models, reflecting aspects of hippocampal sclerosis observed in human beings with severe limbic seizures. Indeed, the recent discovery that complicated febrile seizures can cause hippocampal sclerosis in young children (VanLandingham et al., 1998) establishes yet another commonality between these models and the human condition. Several questions arise with respect to these models. What transpires during the latent period between status epilepticus induced by pilocarpine or kainate and emergence of spontaneous seizures that causes the epilepsy? Might similar mechanisms be operative in kindling development and during the latent period following status epilepticus? Might an antiepileptogenic agent that was effective in one of these models be effective in the other models? Important insights into the mechanisms of action of drugs that are effective against partial seizures have emerged in the past two decades (Macdonald and Greenfield, 1997). These insights have emerged in large part from electrophysiological studies of relatively simple in vitro models, such as neurons isolated from the mammalian CNS and maintained in primary culture. 
The experimental control and accessibility provided by these models together with careful attention to clinically relevant concentrations of the drugs led to clarification of the mechanisms. Although it is difficult to prove unequivocally that a given drug effect observed in vitro is both necessary and sufficient to inhibit a seizure in an animal or human being in vivo, there is an excellent likelihood that the
putative mechanisms identified do in fact underlie the clinically relevant antiseizure effects. Electrophysiological analyses of individual neurons during a partial seizure demonstrate that the neurons undergo depolarization and fire action potentials at high frequencies (Figure 21-1). This pattern of neuronal firing is characteristic of a seizure and is uncommon during physiological neuronal activity. Thus, selective inhibition of this pattern of firing would be expected to reduce seizures with minimal unwanted effects. Carbamazepine, lamotrigine, phenytoin, and valproic acid inhibit high-frequency firing at concentrations known to be effective at limiting seizures in human beings (Macdonald and Greenfield, 1997). Inhibition of the high-frequency firing is thought to be mediated by reducing the ability of Na+ channels to recover from inactivation (Figure 21-2). That is, depolarization-triggered opening of the Na+ channels in the axonal membrane of a neuron is required for an action potential; after opening, the channels spontaneously close, a process termed inactivation. This inactivation is thought to cause the refractory period, a short time after an action potential during which it is not possible to evoke another action potential. Upon recovery from inactivation, the Na+ channels are again poised to participate in another action potential. Because firing at a slow rate permits sufficient time for Na+ channels to recover from inactivation, inactivation has little or no effect on low-frequency firing. However, reducing the rate of recovery of Na+ channels from inactivation would limit the ability of a neuron to fire at high frequencies, an effect that likely underlies the effects of carbamazepine, lamotrigine, phenytoin, topiramate, valproic acid, and zonisamide against partial seizures.

Figure 21-2. Antiseizure Drug-Enhanced Na+ Channel Inactivation.
Some antiseizure drugs (shown in blue text) prolong the inactivation of the Na+ channels, thereby reducing the ability of neurons to fire at high frequencies. Note that the inactivated channel itself appears to remain open but is blocked by the inactivation gate (I). A, activation gate.
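A toy calculation can make this frequency selectivity concrete: a neuron cannot fire again until its Na+ channels recover from inactivation, so prolonging recovery lowers the ceiling on sustained firing while leaving slow firing untouched. The time constants below are invented for illustration, not measured values:

```python
# Sketch: the maximum sustainable firing rate is bounded by the time a neuron
# needs per cycle, i.e., spike duration plus Na+ channel recovery time.
# All numbers here are hypothetical, chosen only to illustrate the idea.

def max_firing_rate_hz(spike_ms: float, recovery_ms: float) -> float:
    """Upper bound on sustained firing, in Hz, for a given cycle time."""
    return 1000.0 / (spike_ms + recovery_ms)

def rate_is_sustainable(target_hz: float, spike_ms: float, recovery_ms: float) -> bool:
    """True if the interspike interval at target_hz allows full recovery."""
    return target_hz <= max_firing_rate_hz(spike_ms, recovery_ms)

SPIKE_MS = 1.0        # assumed action-potential duration
BASELINE_MS = 4.0     # assumed recovery time without drug
DRUGGED_MS = 24.0     # assumed recovery time with drug-slowed recovery

print(max_firing_rate_hz(SPIKE_MS, BASELINE_MS))  # 200.0 Hz ceiling
print(max_firing_rate_hz(SPIKE_MS, DRUGGED_MS))   # 40.0 Hz ceiling

# Physiological low-frequency firing (10 Hz) is unaffected by the drug,
# while seizure-like high-frequency firing (100 Hz) becomes impossible.
print(rate_is_sustainable(10.0, SPIKE_MS, DRUGGED_MS))   # True
print(rate_is_sustainable(100.0, SPIKE_MS, DRUGGED_MS))  # False
```

On these assumed numbers, slowing recovery sixfold cuts the firing ceiling from 200 Hz to 40 Hz, blocking seizure-like rates while a 10 Hz physiological rate remains well within the ceiling.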

Insights into mechanisms of seizures suggest that enhancing GABA-mediated synaptic inhibition would reduce neuronal excitability and raise the seizure threshold. Several drugs are thought to inhibit seizures by regulating GABA-mediated synaptic inhibition through an action at distinct sites of the synapse (Macdonald and Greenfield, 1997). The principal postsynaptic receptor of synaptically released GABA is termed the GABAA receptor. Activation of the GABAA receptor effects inhibition of the postsynaptic cell by increasing the flow of Cl− ions into the cell, which tends to hyperpolarize the neuron. Clinically relevant concentrations of both benzodiazepines and barbiturates can enhance GABAA receptor-mediated inhibition through distinct actions on the
GABAA receptor (Figure 21-3). This mechanism probably underlies the effectiveness of these compounds against partial and tonic-clonic seizures in human beings. At higher concentrations, such as might be used for status epilepticus, these drugs also can inhibit high-frequency firing of action potentials. γ-Vinyl GABA (vigabatrin) is thought to exert its antiseizure action by irreversibly inhibiting an enzyme that degrades GABA, GABA transaminase; this probably leads to increased amounts of GABA available for synaptic release. A third mechanism of enhancing GABA-mediated synaptic inhibition is thought to underlie the antiseizure mechanism of tiagabine; tiagabine inhibits the GABA transporter GAT-1 and thereby reduces neuronal and glial uptake of GABA (Suzdak and Jansen, 1995) (Figure 21-3).

Figure 21-3. Enhanced GABA Synaptic Transmission. In the presence of GABA, the GABAA receptor (structure on left) is opened, allowing an influx of Cl−, which in turn increases membrane polarization (see also Chapter 17: Hypnotics and Sedatives). Some antiseizure drugs (shown in larger blue text) act by reducing the metabolism of GABA. Others act at the GABAA receptor, enhancing Cl− influx in response to GABA. As outlined in the text, gabapentin acts presynaptically to promote GABA release; its molecular target is currently under investigation. GABA-T, GABA transaminase; GAT-1, GABA transporter.
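Why does increasing Cl− flux hyperpolarize the cell? The Nernst equation gives the Cl− equilibrium potential; with high extracellular and low intracellular Cl− (the concentrations below are assumed, textbook-style values, not figures from this chapter), E_Cl lies near or below the resting potential, so opening Cl− channels pulls the membrane toward a more negative voltage:

```python
import math

def nernst_mv(z: int, conc_out: float, conc_in: float, temp_c: float = 37.0) -> float:
    """Equilibrium potential (mV) for an ion of valence z via the Nernst equation."""
    R = 8.314       # J/(mol*K), gas constant
    F = 96485.0     # C/mol, Faraday constant
    T = temp_c + 273.15
    return 1000.0 * (R * T / (z * F)) * math.log(conc_out / conc_in)

# Assumed illustrative Cl− concentrations (mM), outside and inside the cell.
e_cl = nernst_mv(-1, 110.0, 10.0)
print(round(e_cl, 1))  # about -64 mV
```

If the resting potential sits above E_Cl (say -60 mV against roughly -64 mV here), GABAA-gated Cl− influx drives the membrane toward E_Cl, that is, toward hyperpolarization.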

Generalized-Onset Epilepsies: Absence Seizures

In contrast to partial seizures, which arise from localized regions of the cerebral cortex, generalized-onset seizures arise from the reciprocal firing of the thalamus and cerebral cortex (see Coulter, 1998, for review). Among the diverse forms of generalized seizures, absence seizures have been most intensively studied. The striking synchrony in appearance of generalized seizure discharges in widespread areas of neocortex led to the idea that a structure in the thalamus and/or brainstem (the "centrencephalon") synchronized these seizure discharges (Penfield and Jasper, 1947). Attention on the thalamus in particular emerged from the demonstration that low-frequency stimulation of midline thalamic structures triggered EEG rhythms in the cortex similar to the spike-wave discharges characteristic of absence seizures (Jasper and Droogleever-Fortuyn, 1947). Intracerebral electrode recordings from human beings subsequently demonstrated thalamic and neocortical involvement in the spike-and-wave discharge of absence seizures. Many of the structural and functional properties of thalamus and neocortex that lead to the generalized spike-and-wave discharges have been elucidated in the past decade (Coulter, 1998). The EEG hallmark of an absence seizure is generalized spike-and-wave discharges at a frequency of 3 per second. These bilaterally synchronous spike-and-wave discharges, recorded locally from electrodes in both the thalamus and the neocortex, represent oscillations between thalamus and neocortex. A comparison of EEG and intracellular recordings reveals that the EEG spikes are associated with the firing of action potentials and the following slow wave with prolonged inhibition.
These reverberatory, low-frequency rhythms are made possible by a combination of factors, including reciprocal excitatory synaptic connections between neocortex and thalamus as well as intrinsic properties of neurons in the thalamus (see Coulter, 1998, for review). One intrinsic property of thalamic neurons that is pivotally involved in the generation of the 3-per-second spike and wave is a particular form of voltage-regulated Ca2+ current, the low-threshold ("T") current. In contrast to its small size in most neurons, the T current in many neurons throughout the thalamus has a large amplitude. Indeed, bursts of action potentials in thalamic neurons are mediated by activation of the T current. The T current plays an amplifying role in thalamic oscillations, one oscillation being the 3-per-second spike and wave of the absence seizure. Importantly, the principal mechanism by which most antiabsence-seizure drugs (ethosuximide, trimethadione, valproic acid) are thought to act is inhibition of the T current (Figure 21-4; Macdonald and Kelly, 1993). Thus, inhibiting voltage-regulated ion channels is a common mechanism of action of antiseizure drugs, with antipartial-seizure drugs inhibiting voltage-activated Na+ channels and antiabsence-seizure drugs inhibiting voltage-activated Ca2+ channels.

Figure 21-4. Antiseizure Drug-Induced Reduction of Current through T-Type Ca2+ Channels. Some antiepileptic drugs (shown in blue text) reduce the flow of Ca2+ through T-type Ca2+ channels (see also Chapter 12: Neurotransmission and the Central Nervous System), thus reducing the pacemaker current that underlies the thalamic rhythm in spikes and waves seen in generalized absence seizures.

Genetic Approaches to the Epilepsies

Genetic causes contribute to a wide diversity of human epilepsies. Genetic causes are solely responsible for some rare forms inherited in a mendelian pattern, for example, autosomal dominant or autosomal recessive. Genetic causes also are mainly responsible for some common forms such as juvenile myoclonic epilepsy (JME) or childhood absence epilepsy (CAE), disorders likely due to inheritance of two or more susceptibility genes. Genetic determinants also may contribute some degree of risk to epilepsies caused by injury of the cerebral cortex (Berkovic, 1998). Enormous progress has been made in understanding the genetics of mammalian epilepsy in the past several years. Whereas prior to 1994 a specific gene defect had been identified in only a single mouse with a phenotype of cortical epilepsy, more than 33 single-gene mutations now have been linked to an epileptic phenotype (Puranam and McNamara, 1999). This progress has been paralleled by the genetics of human epilepsy. Prior to 1990, not a single gene causing a form of human epilepsy had been identified; mutations of more than a dozen such genes now have been identified. Most of the human epilepsies for which mutant genes have been identified are symptomatic epilepsies, in which the epilepsy seems to be a manifestation of some profound neurodegenerative disease. However, most patients with epilepsy are neurologically normal. It is not clear to what extent the mechanisms underlying the hyperexcitability in a neurologically devastating disease inform mechanisms operative in epilepsies in which the patient is otherwise normal (idiopathic epilepsies). Mutant genes have been identified in four distinct forms of idiopathic human epilepsy.
Remarkably, each of the mutant genes encodes an ion channel gated by voltage or a neurotransmitter; this is of particular interest because several other episodic disorders involving other organs also are caused by mutations of genes encoding ion channels. That is, episodic disorders of the heart (cardiac arrhythmias), skeletal muscle (periodic paralyses), cerebellum (episodic ataxia), vasculature (familial hemiplegic migraine), and other organs all have been linked to mutant genes encoding a component of a voltage-gated ion channel (Ptacek, 1997). The four idiopathic human epilepsies for which the mutant genes have been identified are the following. Generalized epilepsy with febrile seizures plus (GEFS+) is caused by a point mutation in the β1 subunit of a voltage-gated Na+ channel (SCN1B). Interestingly, several antiseizure drugs act on Na+ channels to promote their inactivation; the phenotype of the mutant Na+ channel appears to involve defective inactivation (Wallace et al., 1998). Two forms of benign familial neonatal convulsions have been shown to be caused by mutations of two distinct but related novel K+ channel genes, KCNQ2 and KCNQ3 (Biervert et al., 1998; Singh et al., 1998; Charlier et al., 1998).

Autosomal dominant nocturnal frontal-lobe epilepsy is a fourth form of idiopathic epilepsy for which a mutant gene has been identified, the mutant gene encoding the α4 subunit of the nicotinic cholinergic receptor (CHRNA4) (Steinlein et al., 1995). Each of these is a rare syndrome, and together these four forms likely account for well under 1% of all human epilepsies. In no instance is it yet clear how the genotype leads to the phenotype of epilepsy. Identification of the genes will lead to generation of mutant mice expressing the phenotype; the mutant animals should provide powerful tools with which to elucidate how the genotype produces the phenotype. Importantly, the mutant channels suggest some intriguing molecular targets for development of antiseizure drugs acting by novel mechanisms. These initial successes suggest that many additional epilepsy genes will be identified in the next several years.

Antiseizure Drugs: General Considerations

History

Phenobarbital was the first synthetic organic agent recognized as having antiseizure activity (Hauptmann, 1912); its sedative properties led investigators to test and demonstrate its effectiveness for suppressing seizures. In a landmark discovery, Merritt and Putnam (1938a) developed the electroshock seizure test in experimental animals to screen chemical agents for antiseizure effectiveness; in the course of screening a variety of drugs, they discovered that phenytoin suppressed seizures in the absence of sedative effects. The electroshock seizure test is extremely valuable, because drugs that are effective against tonic hindlimb extension induced by electroshock generally have proven to be effective against partial and tonic-clonic seizures in human beings. Another screening test, seizures induced by the chemoconvulsant pentylenetetrazol, is useful in identifying drugs that are effective against absence seizures in human beings. These screening tests remain useful even now.
The chemical structures of most of the drugs introduced before 1965 were closely related to phenobarbital. These include the hydantoins, the oxazolidinediones, and the succinimides. The agents introduced after 1965 exhibit a diversity of chemical structures. These include benzodiazepines (clonazepam and clorazepate), an iminostilbene (carbamazepine), a branched-chain carboxylic acid (valproic acid), a phenyltriazine (lamotrigine), a cyclic analog of GABA (gabapentin), a sulfamate-substituted monosaccharide (topiramate), a nipecotic acid derivative (tiagabine), and a pyrrolidine derivative (levetiracetam).

Therapeutic Aspects

The ideal antiseizure drug would suppress all seizures without causing any unwanted effects. Unfortunately, the drugs used currently not only fail to control seizure activity in some patients, but they frequently cause unwanted effects that range in severity from minimal impairment of the CNS to death from aplastic anemia or hepatic failure. The physician who treats patients with epilepsy is thus faced with the task of selecting the appropriate drug or combination of drugs that best controls seizures in an individual patient at an acceptable level of untoward effects. It is generally held that complete control of seizures can be achieved in up to 50% of patients and that another 25% can be improved significantly. The degree of success varies as a function of seizure type, cause, and other factors. To minimize toxicity, treatment with a single drug should be sought. If seizures are not controlled at adequate plasma concentrations of the initial agent, substitution of a second drug is preferred to the concurrent administration of another agent. However, multiple-drug therapy may be required, especially when two or more types of seizure occur in the same patient. Measurement of drug concentrations in plasma facilitates optimizing antiseizure medication,
especially when therapy is initiated, after dosage adjustments, in the event of therapeutic failure, when toxic effects appear, or when multiple-drug therapy is instituted. However, clinical effects of some drugs do not correlate well with their concentrations in plasma, and recommended concentrations are only guidelines for therapy. The ultimate therapeutic regimen must be determined by clinical assessment of effect and toxicity. The general principles of the drug therapy of the epilepsies are summarized below, following discussion of the individual agents. Details of diagnosis and therapy can be found in the monographs and reviews listed at the end of the chapter.

Hydantoins

Phenytoin

Phenytoin (diphenylhydantoin; DILANTIN; DIPHENYLAN) is effective against all types of partial and tonic-clonic seizures but not absence seizures. Properties of other hydantoins (mephenytoin, ethotoin) are described in earlier editions of this book.

History

Phenytoin was first synthesized in 1908 by Biltz, but its anticonvulsant activity was not discovered until 1938 (Merritt and Putnam, 1938a,b). In contrast to the earlier accidental discovery of the antiseizure properties of bromide and phenobarbital, phenytoin was the product of a search among nonsedative structural relatives of phenobarbital for agents capable of suppressing electroshock convulsions in laboratory animals. It was introduced for the treatment of epilepsy in the same year. The discovery of phenytoin was a signal advance. Since this agent is not a sedative in ordinary doses, it established that antiseizure drugs need not induce drowsiness and encouraged the search for drugs with selective antiseizure action.

Structure-Activity Relationship

Phenytoin (5,5-diphenylhydantoin) has the following structural formula:
[Structural formula of phenytoin]
A 5-phenyl or other aromatic substituent appears essential for activity against generalized tonic-clonic seizures. Alkyl substituents in position 5 contribute to sedation, a property absent in phenytoin. The carbon at position 5 permits asymmetry, but there appears to be little difference in activity between isomers.

Pharmacological Effects

Central Nervous System

Phenytoin exerts antiseizure activity without causing general depression of the CNS. In toxic doses
it may produce excitatory signs and, at lethal levels, a type of decerebrate rigidity. The most significant effect of phenytoin is its ability to modify the pattern of maximal electroshock seizures. The characteristic tonic phase can be abolished completely, but the residual clonic seizure may be exaggerated and prolonged. This seizure-modifying action is observed with many other antiseizure drugs that are effective against generalized tonic-clonic seizures. By contrast, phenytoin does not inhibit clonic seizures evoked by pentylenetetrazol.

Mechanism of Action

Phenytoin limits the repetitive firing of action potentials evoked by a sustained depolarization of mouse spinal cord neurons maintained in vitro (McLean and Macdonald, 1983). This effect is mediated by a slowing of the rate of recovery of voltage-activated Na+ channels from inactivation, an action that is both voltage-dependent (greater effect if the membrane is depolarized) and use-dependent. These effects of phenytoin are evident at concentrations in the range of therapeutic drug levels in cerebrospinal fluid (CSF) in human beings, concentrations that correlate with the free (or unbound) concentration of phenytoin in the serum. At these concentrations, the effects on Na+ channels are selective, in that no changes of spontaneous activity or of responses to iontophoretically applied GABA or glutamate are detected. At concentrations 5- to 10-fold higher, multiple effects of phenytoin are evident, including reduction of spontaneous activity and enhancement of responses to GABA; these effects may underlie some of the unwanted toxicity associated with high levels of phenytoin.

Pharmacokinetic Properties

The pharmacokinetic characteristics of phenytoin are influenced markedly by its binding to serum proteins, by the nonlinearity of its elimination kinetics, and by its metabolism by the cytochrome P450 enzyme system. Phenytoin is extensively bound (about 90%) to serum proteins, mainly albumin.
Small variations in the percentage of phenytoin that is bound dramatically affect the absolute amount of free (active) drug; increased proportions of free drug are evident in the neonate, in patients with hypoalbuminemia, and in uremic patients. Some agents, such as valproate, can compete with phenytoin for binding sites on plasma proteins; when combined with valproate-mediated inhibition of phenytoin metabolism, marked increases in free phenytoin can result. Measurement of free rather than total phenytoin permits direct assessment of this potential problem in patient management. Phenytoin is one of the few drugs for which the rate of elimination varies as a function of its concentration (i.e., the rate is nonlinear). The plasma half-life of phenytoin ranges between 6 and 24 hours at plasma concentrations below 10 μg/ml but increases with higher concentrations; as a result, plasma drug concentration increases disproportionately as dosage is increased, even with small adjustments for levels near the therapeutic range. The majority (95%) of phenytoin is metabolized in the hepatic endoplasmic reticulum, mainly by the cytochrome P450 isoform CYP2C9/10 and to a lesser extent by CYP2C19 (Table 21-2). The principal metabolite, a para-hydroxyphenyl derivative, is inactive. Because its metabolism is saturable, other drugs that are metabolized by these enzymes can inhibit phenytoin's metabolism and produce a rise in phenytoin concentration. Conversely, the degradation rate of other drugs that are substrates for these enzymes can be inhibited by phenytoin; one such drug is warfarin, and addition of phenytoin to a patient receiving warfarin can lead to hypoprothrombinemia. An alternative mechanism of drug interaction arises from phenytoin's ability to induce diverse cytochrome P450 enzymes; coadministration of phenytoin and medications
metabolized by these enzymes can lead to increased degradation of such medications. Of particular note in this regard are oral contraceptives, which are metabolized by CYP3A4; treatment with phenytoin could enhance the metabolism of oral contraceptives and lead to unplanned pregnancy. The potential teratogenic effects of phenytoin underscore the importance of attention to this interaction. Carbamazepine, oxcarbazepine, phenobarbital, and primidone also can induce CYP3A4 and likewise might increase degradation of oral contraceptives. The low aqueous solubility of phenytoin caused diverse problems for intravenous use and led to the development of fosphenytoin, a water-soluble prodrug. Fosphenytoin (CEREBYX) is converted into phenytoin by phosphatases in liver and red blood cells with a half-life of 8 to 15 minutes. Fosphenytoin is useful for adults with partial or generalized seizures when parenteral administration is indicated.

Toxicity

The toxic effects of phenytoin depend upon the route of administration, the duration of exposure, and the dosage. When fosphenytoin, the water-soluble prodrug, is administered intravenously at an excessive rate in the emergency treatment of status epilepticus, the most notable toxic signs are cardiac arrhythmias, with or without hypotension, and/or CNS depression. Although cardiac toxicity occurs more frequently in older patients and in those with known cardiac disease, it also can develop in young, healthy patients. These complications can be minimized by administering fosphenytoin at a rate of less than 150 mg of phenytoin sodium equivalents per minute. Acute oral overdosage results primarily in signs referable to the cerebellum and vestibular system; high doses have been associated with marked cerebellar atrophy.
Toxic effects associated with chronic medication also are primarily dose-related cerebellar-vestibular effects but include other CNS effects, behavioral changes, increased frequency of seizures, gastrointestinal symptoms, gingival hyperplasia, osteomalacia, and megaloblastic anemia. Hirsutism is an annoying untoward effect in young females. Usually, these phenomena can be made bearable by proper adjustment of dosage. Serious adverse effects, including those on the skin, bone marrow, and liver, probably are manifestations of drug allergy. Although rare, they necessitate withdrawal of the drug. Moderate elevation of the plasma concentrations of enzymes used to assess hepatic function sometimes is observed; since these changes are transient and may result in part from induced synthesis of the enzymes, they do not necessitate withdrawal of the drug. Electrophysiological evidence of peripheral neuropathy can occur in up to 30% of patients receiving phenytoin, but this phenomenon usually is not clinically significant. Gingival hyperplasia occurs in about 20% of all patients during chronic therapy and is probably the most common manifestation of phenytoin toxicity in children and young adolescents. It may be more frequent in those individuals who also develop coarsened facial features. The overgrowth of tissue appears to involve altered collagen metabolism. Toothless portions of the gums are not affected. The condition does not necessarily require withdrawal of medication, and it can be minimized by good oral hygiene. A variety of endocrine effects have been reported. Inhibition of release of antidiuretic hormone (ADH) has been observed in patients with inappropriate ADH secretion. Hyperglycemia and glycosuria appear to be due to inhibition of insulin secretion. Osteomalacia, with hypocalcemia and elevated alkaline phosphatase activity, has been attributed to both altered metabolism of vitamin D and inhibition of intestinal absorption of Ca2+.
Phenytoin also increases the metabolism of vitamin K and reduces the concentration of vitamin K-dependent proteins that are important for normal Ca2+ metabolism in bone. This may explain why the osteomalacia is not always ameliorated by the administration of vitamin D.

Hypersensitivity reactions include morbilliform rash in 2% to 5% of patients and occasionally more serious skin reactions, including Stevens-Johnson syndrome. Systemic lupus erythematosus and potentially fatal hepatic necrosis have been reported rarely. Hematological reactions include neutropenia and leukopenia. A few instances of red-cell aplasia, agranulocytosis, and mild thrombocytopenia also have been reported. Lymphadenopathy, resembling Hodgkin's disease and malignant lymphoma, is associated with reduced immunoglobulin A (IgA) production. Hypoprothrombinemia and hemorrhage have occurred in the newborns of mothers who received phenytoin during pregnancy; vitamin K is effective treatment or prophylaxis.

Plasma Drug Concentrations

A good correlation usually is observed between the total concentration of phenytoin in plasma and the clinical effect. Thus, control of seizures generally is obtained with concentrations above 10 μg/ml, while toxic effects such as nystagmus develop at concentrations around 20 μg/ml.

Drug Interactions

Concurrent administration of any drug metabolized by the CYP2C9/10 isoform of cytochrome P450 can increase the plasma concentration of phenytoin by decreasing its rate of metabolism. Carbamazepine, which may enhance the metabolism of phenytoin, causes a well-documented decrease in phenytoin concentration. Conversely, phenytoin reduces the concentration of carbamazepine. Interaction between phenytoin and phenobarbital is variable.

Therapeutic Uses

Epilepsy

Phenytoin is one of the more widely used antiseizure agents, and it is effective against partial and tonic-clonic but not absence seizures. The use of phenytoin and other agents in the therapy of epilepsies is discussed further at the end of this chapter. Various preparations of phenytoin differ significantly in both bioavailability and rate of absorption, and patients should thus be treated with the drug product of a single manufacturer.
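The disproportionate rise of phenytoin concentration with dose, noted under Pharmacokinetic Properties, follows from saturable (Michaelis-Menten) elimination: at steady state, Css = Km * R / (Vmax - R), where R is the dosing rate. The Vmax and Km values below are illustrative assumptions, not dosing guidance:

```python
# Steady-state concentration under saturable (Michaelis-Menten) elimination.
# Css = Km * R / (Vmax - R); a steady state exists only while R < Vmax.
# Parameter values are hypothetical, chosen for illustration only.

def steady_state_conc(rate_mg_day: float, vmax_mg_day: float, km_ug_ml: float) -> float:
    """Predicted steady-state plasma concentration (μg/ml) at dosing rate R."""
    if rate_mg_day >= vmax_mg_day:
        raise ValueError("dosing rate >= elimination capacity: no steady state")
    return km_ug_ml * rate_mg_day / (vmax_mg_day - rate_mg_day)

VMAX = 500.0  # mg/day, assumed maximal elimination capacity
KM = 4.0      # μg/ml, assumed concentration at half-maximal elimination

print(steady_state_conc(300.0, VMAX, KM))  # 6.0 μg/ml
print(steady_state_conc(400.0, VMAX, KM))  # 16.0 μg/ml
```

On these assumed parameters, a 33% increase in dose (300 to 400 mg/day) nearly triples the steady-state level, carrying it from below the 10 μg/ml therapeutic threshold toward the roughly 20 μg/ml range at which nystagmus appears.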
Other Uses

Some cases of trigeminal and related neuralgias appear to respond to phenytoin, but carbamazepine may be preferable. The use of phenytoin in the treatment of cardiac arrhythmias is discussed in Chapter 35: Antiarrhythmic Drugs.

Antiseizure Barbiturates

The pharmacology of the barbiturates as a class is considered in Chapter 17: Hypnotics and Sedatives; discussion in this chapter is limited to the two barbiturates used for therapy of the epilepsies. Although still marketed, a third barbiturate (metharbital) has virtually disappeared from therapeutic use.

Phenobarbital

Phenobarbital (LUMINAL, others) was the first effective organic antiseizure agent (Hauptmann, 1912). It has relatively low toxicity, is inexpensive, and is still one of the more effective and widely
used drugs for this purpose.

Structure-Activity Relationship

The structural formula of phenobarbital (5-phenyl-5-ethylbarbituric acid) is shown in Chapter 17: Hypnotics and Sedatives. The structure-activity relationship of the barbiturates has been studied extensively. Maximal antiseizure activity is obtained when one substituent at position 5 is a phenyl group. The 5,5-diphenyl derivative has less antiseizure potency than does phenobarbital, but it is virtually devoid of hypnotic activity. By contrast, 5,5-dibenzyl barbituric acid causes convulsions.

Antiseizure Properties

Most barbiturates have antiseizure properties. However, the capacity of some of these agents, such as phenobarbital, to exert maximal antiseizure action at doses below those required for hypnosis determines their clinical utility as antiseizure agents. Phenobarbital is active in most antiseizure tests in animals but is relatively nonselective. It inhibits tonic hindlimb extension in the maximal electroshock model, clonic seizures evoked by pentylenetetrazol, and kindled seizures.

Mechanism of Action

The mechanism by which phenobarbital inhibits seizures likely involves potentiation of synaptic inhibition through an action on the GABAA receptor. Intracellular recordings of mouse cortical or spinal cord neurons demonstrated that phenobarbital enhances responses to iontophoretically applied GABA (Macdonald and Barker, 1979). These effects have been observed at therapeutically relevant concentrations of phenobarbital. Analyses of single channels in outside-out patches isolated from mouse spinal cord neurons demonstrated that phenobarbital increased the GABAA receptor-mediated current by increasing the duration of bursts of GABAA receptor-mediated currents without changing the frequency of bursts (Twyman et al., 1989).
At levels exceeding therapeutic concentrations, phenobarbital also limits sustained repetitive firing; this may underlie some of the antiseizure effects of the higher concentrations of phenobarbital achieved during therapy of status epilepticus. The mechanisms underlying the antiseizure as opposed to the sedative effects of the barbiturates have been enigmatic. That is, pentobarbital inhibits seizures, but only at doses that produce marked sedation; by contrast, phenobarbital inhibits seizures at doses that cause minimal sedative effects. Both pentobarbital and phenobarbital enhance GABAA receptor-mediated currents. Distinctive effects of pentobarbital and phenobarbital on GABA responses and on voltage-activated Ca2+ channels may explain this enigma (ffrench-Mullen et al., 1993). The maximal effect of phenobarbital in enhancing GABA responses is only 40% of that of the active isomer of pentobarbital. Moreover, pentobarbital inhibits voltage-activated Ca2+ channels with greater potency than does phenobarbital (ffrench-Mullen et al., 1993); one consequence of inhibition of these Ca2+ channels could be blockade of Ca2+ entry into presynaptic nerve terminals and inhibition of release of neurotransmitters such as glutamate, resulting in a net reduction of excitatory synaptic transmission. Thus, the powerful sedative actions of pentobarbital could be due to greater maximal enhancement of GABA responses in conjunction with strong inhibition of Ca2+ current.

Pharmacokinetic Properties

Oral absorption of phenobarbital is complete but somewhat slow; peak concentrations in plasma occur several hours after a single dose. It is 40% to 60% bound to plasma proteins and bound to a similar extent in tissues, including brain. Up to 25% of a dose is eliminated by pH-dependent renal
excretion of the unchanged drug; the remainder is inactivated by hepatic microsomal enzymes. The principal cytochrome P450 responsible is CYP2C9, with minor metabolism by CYP2C19 and CYP2E1. Phenobarbital induces uridine diphosphate glucuronosyltransferase (UGT) enzymes as well as the CYP2C and CYP3A subfamilies of cytochrome P450. Drugs metabolized by these enzymes can be more rapidly degraded when coadministered with phenobarbital; importantly, oral contraceptives are metabolized by CYP3A4.

Toxicity
Sedation, the most frequent undesired effect of phenobarbital, is apparent to some extent in all patients upon initiation of therapy, but tolerance develops during chronic medication. Nystagmus and ataxia occur at excessive dosage. Phenobarbital sometimes produces irritability and hyperactivity in children, and agitation and confusion in the elderly. Scarlatiniform or morbilliform rash, possibly with other manifestations of drug allergy, occurs in 1% to 2% of patients. Exfoliative dermatitis is rare. Hypoprothrombinemia with hemorrhage has been observed in the newborn of mothers who have received phenobarbital during pregnancy; vitamin K is effective for treatment or prophylaxis. Megaloblastic anemia that responds to folate and osteomalacia that responds to high doses of vitamin D occur during chronic phenobarbital therapy of epilepsy, as they do during phenytoin medication. Other adverse effects of phenobarbital are discussed in Chapter 17: Hypnotics and Sedatives.

Plasma Drug Concentrations
During long-term therapy in adults, the plasma concentration of phenobarbital averages 10 μg/ml per daily dose of 1 mg/kg; in children, the value is 5 to 7 μg/ml per 1 mg/kg. Although a precise relationship between therapeutic results and concentration of drug in plasma does not exist, plasma concentrations of 10 to 35 μg/ml are usually recommended for control of seizures; 15 μg/ml is the minimum for prophylaxis against febrile convulsions.
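The rule-of-thumb relationship above (roughly 10 μg/ml of plasma phenobarbital per 1 mg/kg daily dose in adults, 5 to 7 μg/ml per 1 mg/kg in children) lends itself to a quick back-of-the-envelope estimate. A minimal sketch in Python; the function name and the use of a midpoint value for children are illustrative assumptions, not a clinical tool:

```python
# Rough steady-state estimate for phenobarbital based on the averages
# quoted in the text. Illustrative only; not for clinical use.

def phenobarbital_css(daily_dose_mg: float, weight_kg: float,
                      child: bool = False) -> float:
    """Return an approximate steady-state plasma level in ug/ml."""
    per_mg_kg = 6.0 if child else 10.0   # midpoint of the 5-7 range for children
    return per_mg_kg * (daily_dose_mg / weight_kg)

# A 70-kg adult on 140 mg/day (2 mg/kg) would be expected near 20 ug/ml,
# within the commonly recommended 10-35 ug/ml range.
print(phenobarbital_css(140, 70))  # 20.0
```

Such an estimate only anchors an initial expectation; as the text notes, the correlation between plasma level and clinical effect is imprecise.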
The relationship between plasma concentration of phenobarbital and adverse effects varies with the development of tolerance. Sedation, nystagmus, and ataxia usually are absent at concentrations below 30 μg/ml during long-term therapy, but adverse effects may be apparent for several days at lower concentrations when therapy is initiated or whenever the dosage is increased. Concentrations greater than 60 μg/ml may be associated with marked intoxication in the nontolerant individual. Since significant behavioral toxicity may be present despite the absence of overt signs of toxicity, the tendency to maintain patients, particularly children, on excessively high doses of phenobarbital should be resisted. The plasma phenobarbital concentration should be increased above 30 to 40 μg/ml only if the increment is adequately tolerated and only if it contributes significantly to control of seizures.

Drug Interactions
Interactions between phenobarbital and other drugs usually involve induction of the hepatic microsomal enzyme system by phenobarbital (see Chapters 1: Pharmacokinetics: The Dynamics of Drug Absorption, Distribution, and Elimination and 17: Hypnotics and Sedatives). The variable interaction with phenytoin has been discussed above. Concentrations of phenobarbital in plasma may be elevated by as much as 40% during concurrent administration of valproic acid (see below).

Therapeutic Uses Phenobarbital is an effective agent for generalized tonic-clonic and partial seizures. Its efficacy, low toxicity, and low cost make it an important agent for these types of epilepsy. However, its sedative effects and its tendency to disturb behavior in children have reduced its use as a primary agent. Mephobarbital (MEBARAL) is N-methylphenobarbital. It is N-demethylated in the hepatic endoplasmic reticulum, and most of its activity during long-term therapy can be attributed to the accumulation of phenobarbital. Consequently, the pharmacological properties, toxicity, and clinical uses of mephobarbital are the same as those for phenobarbital. Deoxybarbiturates Primidone Primidone (MYSOLINE) is effective against partial and tonic-clonic seizures. Chemistry Primidone may be viewed as a congener of phenobarbital in which the carbonyl oxygen of the urea moiety is replaced by two hydrogen atoms:

Antiseizure Properties Primidone resembles phenobarbital in many laboratory antiseizure effects, but it is much less potent than phenobarbital in antagonizing seizures induced by pentylenetetrazol. The antiseizure effects of primidone are attributed to both the drug and its active metabolites, principally phenobarbital. Pharmacokinetic Properties Primidone is rapidly and almost completely absorbed after oral administration, although individual variability can be great. Peak concentrations in plasma usually are observed approximately 3 hours after ingestion. The plasma half-life of primidone is variable; mean values ranging from 5 to 15 hours have been reported. Primidone is converted to two active metabolites, phenobarbital and phenylethylmalonamide (PEMA). Primidone and PEMA are bound to plasma proteins to only a small extent, whereas about half of phenobarbital is so bound. The half-life of PEMA in plasma is 16 hours; both it and phenobarbital accumulate during long-term therapy. The appearance of phenobarbital in plasma may be delayed several days upon initiation of therapy with primidone. Approximately 40% of the drug is excreted unchanged in the urine; unconjugated PEMA and, to a lesser extent, phenobarbital
and its metabolites constitute the remainder.

Toxicity
The more common complaints are sedation, vertigo, dizziness, nausea, vomiting, ataxia, diplopia, and nystagmus. Patients also may experience an acute feeling of intoxication immediately following administration of primidone. This occurs before there is any significant metabolism of the drug. The relationship of adverse effects to dosage is complex, since they result from both the parent drug and its two active metabolites and since tolerance develops during long-term therapy. Side effects are occasionally quite severe when therapy is initiated. Serious adverse effects are relatively uncommon, but maculopapular and morbilliform rash, leukopenia, thrombocytopenia, systemic lupus erythematosus, and lymphadenopathy have been reported. Acute psychotic reactions also have occurred. Hemorrhagic disease in the neonate, megaloblastic anemia, and osteomalacia similar to those discussed previously in connection with phenytoin and phenobarbital also have been described.

Plasma Drug Concentrations
The relationship between the dose of primidone and the concentration of the drug and its active metabolites in plasma shows marked individual variability. During long-term therapy, the plasma concentrations of primidone and phenobarbital average 1 μg/ml and 2 μg/ml, respectively, per daily dose of 1 mg/kg of primidone. The plasma concentration of PEMA usually is intermediate between those of primidone and phenobarbital. There is no clear relationship between the concentrations of primidone or its metabolites in plasma and therapeutic effect. As an initial guide, the dosage of primidone may be adjusted primarily with reference to the concentration of phenobarbital, as outlined previously for administered phenobarbital, and secondarily with reference to the concentration of the parent drug. Concentrations of primidone greater than 10 μg/ml usually are associated with significant toxic side effects.
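The delayed appearance of phenobarbital and the gradual accumulation of PEMA during primidone therapy follow directly from first-order kinetics: a drug reaches about 94% of its steady-state level after four half-lives. A hedged sketch; the 16-hour PEMA half-life comes from the text, while the roughly 100-hour phenobarbital half-life is an assumed representative value, not stated here:

```python
# Fraction of steady state reached after a given time, for first-order
# elimination kinetics. Illustrative only.

def fraction_of_steady_state(hours: float, half_life_h: float) -> float:
    """Return the fraction of the eventual steady-state level attained."""
    return 1.0 - 0.5 ** (hours / half_life_h)

# PEMA (t1/2 = 16 h, from the text) is ~94% of steady state within ~3 days:
print(round(fraction_of_steady_state(64, 16), 2))   # 0.94
# Derived phenobarbital (t1/2 ~ 100 h, an assumed value) is far from
# steady state at that point, consistent with its delayed appearance:
print(round(fraction_of_steady_state(64, 100), 2))  # 0.36
```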
Drug Interactions
Phenytoin has been reported to increase the conversion of primidone to phenobarbital. Other drug interactions to be anticipated are those for phenobarbital.

Therapeutic Uses
Primidone is useful against generalized tonic-clonic and both simple and complex partial seizures. Its use in combination with phenobarbital is illogical. Primidone is ineffective against absence seizures but is sometimes useful against myoclonic seizures in young children.

Iminostilbenes
Carbamazepine
Carbamazepine (TEGRETOL, CARBATROL, others) was initially approved in the United States for use as an antiseizure agent in 1974. It has been employed since the 1960s for the treatment of trigeminal neuralgia. It is now considered to be a primary drug for the treatment of partial and tonic-clonic seizures.

Chemistry Carbamazepine is related chemically to the tricyclic antidepressants. It is a derivative of iminostilbene with a carbamyl group at the 5 position; this moiety is essential for potent antiseizure activity. The structural formula of carbamazepine is as follows:

Pharmacological Effects
Although the effects of carbamazepine in animals and human beings resemble those of phenytoin in many ways, the two drugs differ in a number of potentially important ways. Carbamazepine has been found to produce therapeutic responses in manic-depressive patients, including some in whom lithium carbonate is not effective. Further, carbamazepine has antidiuretic effects that are sometimes associated with reduced concentrations of antidiuretic hormone (ADH) in plasma. The mechanisms responsible for these effects of carbamazepine are not clearly understood.

Mechanism of Action
Like phenytoin, carbamazepine limits the repetitive firing of action potentials evoked by a sustained depolarization of mouse spinal cord or cortical neurons maintained in vitro (McLean and Macdonald, 1986b). This appears to be mediated by a slowing of the rate of recovery of voltage-activated Na+ channels from inactivation. These effects of carbamazepine are evident at concentrations in the range of therapeutic drug levels in CSF in human beings. The effects of carbamazepine are selective at these concentrations, in that there are no effects on spontaneous activity or on responses to iontophoretically applied GABA or glutamate. The carbamazepine metabolite, 10,11-epoxycarbamazepine, also limits sustained repetitive firing at therapeutically relevant concentrations, suggesting that this metabolite may contribute to the antiseizure efficacy of carbamazepine.

Pharmacokinetic Properties
The pharmacokinetic characteristics of carbamazepine are complex. They are influenced by its limited aqueous solubility and by the ability of many antiseizure drugs, including carbamazepine itself, to increase its conversion to active metabolites by inducing hepatic oxidative enzymes. Carbamazepine is absorbed slowly and erratically after oral administration.
Peak concentrations in plasma usually are observed 4 to 8 hours after oral ingestion, but may be delayed by as much as 24 hours, especially following the administration of a large dose. The drug distributes rapidly into all tissues. Binding to plasma proteins occurs to the extent of about 75%, and concentrations in the CSF appear to correspond to the concentration of free drug in plasma. The predominant pathway of metabolism in human beings involves conversion to the 10,11-epoxide. This metabolite is as active as the parent compound in various animals, and its concentrations in plasma and brain may reach 50% of those of carbamazepine, especially during the concurrent administration of phenytoin or phenobarbital. The 10,11-epoxide is metabolized further
to inactive compounds, which are excreted in the urine principally as glucuronides. Carbamazepine also is inactivated by conjugation and hydroxylation. The hepatic cytochrome P450 isoform primarily responsible for biotransformation of carbamazepine is CYP3A4. Carbamazepine induces CYP2C and CYP3A and also UDP-glucuronosyltransferase, thus enhancing the metabolism of drugs degraded by these enzymes. Of particular importance in this regard are oral contraceptives, which are metabolized by CYP3A4. Toxicity Acute intoxication with carbamazepine can result in stupor or coma, hyperirritability, convulsions, and respiratory depression. During long-term therapy, the more frequent untoward effects of the drug include drowsiness, vertigo, ataxia, diplopia, and blurred vision. The frequency of seizures may increase, especially with overdosage. Other adverse effects include nausea, vomiting, serious hematological toxicity (aplastic anemia, agranulocytosis), and hypersensitivity reactions (dermatitis, eosinophilia, lymphadenopathy, splenomegaly). A late complication of therapy with carbamazepine is retention of water, with decreased osmolality and concentration of Na+ in plasma, especially in elderly patients with cardiac disease. Some tolerance develops to the neurotoxic effects of carbamazepine, and they can be minimized by gradual increase in dosage or adjustment of maintenance dosage. Various hepatic or pancreatic abnormalities have been reported during therapy with carbamazepine, most commonly a transient elevation of hepatic enzymes in plasma in 5% to 10% of patients. A transient, mild leukopenia occurs in about 10% of patients during initiation of therapy and usually resolves within the first 4 months of continued treatment; transient thrombocytopenia also has been noted. In about 2% of patients, a persistent leukopenia may develop that requires withdrawal of the drug. 
The initial concern that aplastic anemia might be a frequent complication of long-term therapy with carbamazepine has not materialized. In the majority of cases, the administration of multiple drugs or the presence of another underlying disease has made it difficult to establish a causal relationship. In any event, the prevalence of aplastic anemia appears to be about 1 in 200,000 patients who are treated with the drug. It is not clear whether or not monitoring of hematological function can avert the development of irreversible aplastic anemia. Although carbamazepine is carcinogenic in rats, it is not known to be carcinogenic in human beings. The induction of fetal malformations during the treatment of pregnant women is discussed below.

Plasma Drug Concentrations
There is no simple relationship between the dose of carbamazepine and concentrations of the drug in plasma. Therapeutic concentrations are reported to be 6 to 12 μg/ml, although considerable variation occurs. Side effects referable to the CNS are frequent at concentrations above 9 μg/ml.

Drug Interactions
Phenobarbital, phenytoin, and valproate may increase the metabolism of carbamazepine by inducing CYP3A4; carbamazepine may enhance the biotransformation of phenytoin as well as the conversion of primidone to phenobarbital. Administration of carbamazepine may lower concentrations of valproate, lamotrigine, tiagabine, and topiramate given concurrently. Carbamazepine reduces both the plasma concentration and therapeutic effect of haloperidol. The metabolism of carbamazepine may be inhibited by propoxyphene, erythromycin, cimetidine, fluoxetine, and isoniazid.

Therapeutic Uses Carbamazepine is useful in patients with generalized tonic-clonic and both simple and complex partial seizures. When it is used, renal and hepatic function and hematological parameters should be monitored. The therapeutic use of carbamazepine is discussed further at the end of this chapter. Carbamazepine was introduced by Blom in the early 1960s and is now the primary agent for treatment of trigeminal and glossopharyngeal neuralgias. It is also effective for lightning tabetic pain. Most patients with neuralgia are benefited initially, but only 70% obtain continuing relief. Adverse effects have required discontinuation of medication in 5% to 20% of patients. The therapeutic range of plasma concentrations for antiseizure therapy serves as a guideline for its use in neuralgia. Carbamazepine also has found use in the treatment of bipolar affective disorders, a use that is discussed further in Chapter 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania. Oxcarbazepine Oxcarbazepine (TRILEPTAL) (10,11-dihydro-10-oxocarbamazepine) is a keto analog of carbamazepine. In human beings, oxcarbazepine functions as a prodrug, in that it is almost immediately converted to its main active metabolite, a 10-monohydroxy derivative which is inactivated by glucuronide conjugation and eliminated by renal excretion. Its mechanism of action is similar to that of carbamazepine. Oxcarbazepine is a less potent enzyme inducer than is carbamazepine, and substitution of oxcarbazepine for carbamazepine is associated with increased levels of phenytoin and valproic acid, presumably because of reduced induction of hepatic enzymes. There is no induction by oxcarbazepine of hepatic enzymes involved in its degradation. Although oxcarbazepine does not appear to reduce the anticoagulant effect of warfarin, it does induce CYP3A and thus reduces plasma levels of steroid oral contraceptives. 
It has been approved for monotherapy or adjunct therapy for partial seizures in adults and as adjunctive therapy for partial seizures in children ages 4 to 16. Succinimides Ethosuximide The succinimides evolved from a systematic search for effective agents less toxic than the oxazolidinediones for the treatment of absence seizures. Ethosuximide (ZARONTIN) is a primary agent for this type of epilepsy. Structure-Activity Relationship Ethosuximide has the following structural formula:

The structure-activity relationship of the succinimides is in accord with that for other antiseizure
classes. Methsuximide (CELONTIN) and phensuximide (MILONTIN) have phenyl substituents and are more active against maximal electroshock seizures. Neither of these is now in common use. Discussion of their properties can be found in older editions of this book. Ethosuximide, with alkyl substituents, is the most active of the succinimides against seizures induced by pentylenetetrazol and is the most selective for absence seizures. Pharmacological Effects The most prominent characteristic of ethosuximide at nontoxic doses is protection against clonic motor seizures induced by pentylenetetrazol. By contrast, at nontoxic doses, ethosuximide does not inhibit tonic hindlimb extension of electroshock seizures or kindled seizures. This profile correlates with efficacy against absence seizures in human beings. Mechanism of Action Ethosuximide reduces low threshold Ca2+ currents (T currents) in thalamic neurons (Coulter et al., 1989). The thalamus plays an important role in generation of 3-Hz spike-wave rhythms typical of absence seizures (Coulter, 1998). Neurons in the thalamus exhibit a large amplitude T-current spike that underlies bursts of action potentials and likely plays an important role in thalamic oscillatory activity such as 3-Hz spike-and-wave activity. At clinically relevant concentrations, ethosuximide inhibits the T current, as evident in voltage-clamp recordings of acutely isolated, ventrobasal thalamic neurons from rats and guinea pigs. Ethosuximide reduces this current without modifying the voltage dependence of steady-state inactivation or the time course of recovery from inactivation. By contrast, succinimide derivatives with convulsant properties do not inhibit this current. Ethosuximide does not inhibit sustained repetitive firing or enhance GABA responses at clinically relevant concentrations. Current data are consistent with the idea that inhibition of T currents is the mechanism by which ethosuximide inhibits absence seizures. 
Pharmacokinetic Properties
Absorption of ethosuximide appears to be complete, and peak concentrations occur in plasma within about 3 hours after a single oral dose. Ethosuximide is not significantly bound to plasma proteins; during long-term therapy, the concentration in the CSF is similar to that in plasma. The apparent volume of distribution averages 0.7 liter/kg. In human beings, 25% of the drug is excreted unchanged in the urine. The remainder is metabolized by hepatic microsomal enzymes, but whether or not cytochrome P450 enzymes are responsible is unknown. The major metabolite, the hydroxyethyl derivative, accounts for about 40% of administered drug, is inactive, and is excreted as such and as the glucuronide in the urine. The plasma half-life of ethosuximide averages between 40 and 50 hours in adults and approximately 30 hours in children.

Toxicity
The most common dose-related side effects are gastrointestinal complaints (nausea, vomiting, and anorexia) and CNS effects (drowsiness, lethargy, euphoria, dizziness, headache, and hiccough). Some tolerance to these effects develops. Parkinson-like symptoms and photophobia also have been reported. Restlessness, agitation, anxiety, aggressiveness, inability to concentrate, and other behavioral effects have occurred primarily in patients with a prior history of psychiatric disturbance. Urticaria and other skin reactions, including Stevens-Johnson syndrome, as well as systemic lupus erythematosus, eosinophilia, leukopenia, thrombocytopenia, pancytopenia, and aplastic anemia, also have been attributed to the drug. The leukopenia may be transient despite continuation of the drug, but several deaths have resulted from bone-marrow depression. Renal or hepatic toxicity has not been reported.

Plasma Drug Concentrations
During long-term therapy, the plasma concentration of ethosuximide averages about 2 μg/ml per daily dose of 1 mg/kg. A plasma concentration of 40 to 100 μg/ml is required for satisfactory control of absence seizures in most patients.

Therapeutic Uses
Ethosuximide is effective against absence seizures but not tonic-clonic seizures and has a lower risk of adverse effects than does trimethadione, a drug formerly used to treat absence seizures (its properties are discussed in earlier editions of this book). It is an important therapeutic agent for this type of epilepsy. An initial daily dose of 250 mg in children (3 to 6 years old) and 500 mg in older children and adults is increased by 250-mg increments at weekly intervals until seizures are adequately controlled or toxicity intervenes. Divided dosage is required occasionally to prevent nausea or drowsiness associated with single daily dosage. The usual maintenance dose is 20 mg/kg per day. Increased caution is required if the daily dose exceeds 1500 mg in adults or 750 to 1000 mg in children. The use of ethosuximide and the other antiseizure agents is discussed further at the end of the chapter.

Valproic Acid
Valproic acid (DEPAKENE, others) was approved for use in the United States in 1978. The antiseizure properties of valproate were discovered serendipitously when it was employed as a vehicle for other compounds that were being screened for antiseizure activity.

Chemistry
Valproic acid (n-dipropylacetic acid) is a simple branched-chain carboxylic acid; its structural formula is as follows:

Certain other branched-chain carboxylic acids have potencies similar to that of valproic acid in antagonizing pentylenetetrazol-induced convulsions. However, increasing the number of carbon atoms to nine introduces marked sedative properties. Straight-chain acids have little or no activity. The primary amide of valproic acid is about twice as potent as the parent compound. Pharmacological Effects Valproic acid is strikingly different from phenytoin or ethosuximide in that it is effective in
inhibiting seizures in a variety of models. Like phenytoin and carbamazepine, valproate inhibits tonic hindlimb extension in maximal electroshock seizures and kindled seizures at doses without toxicity. Like ethosuximide, valproic acid inhibits clonic motor seizures induced by pentylenetetrazol at subtoxic doses. Its efficacy in diverse models parallels its efficacy against absence as well as partial and generalized tonic-clonic seizures in human beings.

Mechanism of Action
Valproic acid produces effects on isolated neurons similar to those of both phenytoin and ethosuximide. At therapeutically relevant concentrations, valproate inhibits sustained repetitive firing induced by depolarization of mouse cortical or spinal cord neurons (McLean and Macdonald, 1986a). The action is similar to that of both phenytoin and carbamazepine and appears to be mediated by a prolonged recovery of voltage-activated Na+ channels from inactivation. Valproic acid does not modify neuronal responses to iontophoretically applied GABA. In neurons isolated from a distinct region, the nodose ganglion, valproate also produces small reductions of the low-threshold (T) Ca2+ current (Kelly et al., 1990) at clinically relevant but slightly higher concentrations than those that limit sustained repetitive firing; this effect on T currents is similar to that of ethosuximide in thalamic neurons (Coulter et al., 1989). Together, these actions of limiting sustained repetitive firing and reducing T currents may contribute to the effectiveness of valproic acid against partial and tonic-clonic seizures and absence seizures, respectively. Another potential mechanism that may contribute to valproate's antiseizure actions involves metabolism of GABA. Although valproate has no effect on responses to GABA, it does increase the amount of GABA that can be recovered from the brain after the drug is administered to animals.
In vitro, valproate can stimulate the activity of the GABA synthetic enzyme, glutamic acid decarboxylase (Phillips and Fowler, 1982), and inhibit the GABA degradative enzymes, GABA transaminase and succinic semialdehyde dehydrogenase (Chapman et al., 1982). Thus far it has been difficult to relate the increased GABA levels to the antiseizure activity of valproate.

Pharmacokinetic Properties
Valproic acid is absorbed rapidly and completely after oral administration. Peak concentration in plasma is observed in 1 to 4 hours, although this can be delayed for several hours if the drug is administered in enteric-coated tablets or is ingested with meals. The apparent volume of distribution for valproate is about 0.2 liter/kg. Its extent of binding to plasma proteins is usually about 90%, but the fraction bound is reduced as the total concentration of valproate is increased through the therapeutic range. Although concentrations of valproate in CSF suggest equilibration with free drug in the blood, there is evidence for carrier-mediated transport of valproate both into and out of the CSF. The vast majority of valproate (95%) undergoes hepatic metabolism, with less than 5% excreted unchanged. Its hepatic metabolism occurs mainly by UGT enzymes and β-oxidation. Valproate is a substrate for CYP2C9 and CYP2C19, but metabolism by these enzymes accounts for a relatively minor portion of its elimination. Some of the drug's metabolites, notably 2-propyl-2-pentenoic acid and 2-propyl-4-pentenoic acid, are nearly as potent antiseizure agents as the parent compound; however, only the former (2-en-valproic acid) accumulates in plasma and brain to a potentially significant extent (see above). The half-life of valproate is approximately 15 hours but is reduced in patients taking other antiepileptic drugs.

Toxicity
The most common side effects are transient gastrointestinal symptoms, including anorexia, nausea, and vomiting, in about 16% of patients. Effects on the CNS include sedation, ataxia, and tremor; these symptoms occur infrequently and usually respond to a decrease in dosage. Rash, alopecia, and stimulation of appetite have been observed occasionally. Valproic acid has several effects on hepatic function. Elevation of hepatic enzymes in plasma is observed in up to 40% of patients and often occurs asymptomatically during the first several months of therapy. A rare complication is a fulminant hepatitis that is frequently fatal (see Dreifuss et al., 1989). Pathological examination reveals a microvesicular steatosis without evidence of inflammation or hypersensitivity reaction. Children below 2 years of age with other medical conditions who were given multiple antiseizure agents were especially likely to suffer fatal hepatic injury. At the other extreme, there were no deaths reported for patients over the age of 10 years who received only valproate. Acute pancreatitis and hyperammonemia also have been frequently associated with the use of valproic acid.

Plasma Drug Concentrations
The concentration of valproate in plasma that is associated with therapeutic effects is approximately 30 to 100 μg/ml. However, the correlation between this concentration and efficacy is poor. There appears to be a threshold at about 30 to 50 μg/ml; this is the concentration at which binding sites on plasma albumin begin to become saturated.

Drug Interactions
Valproate primarily inhibits the metabolism of drugs degraded by CYP2C9, including phenytoin and phenobarbital. Valproate also inhibits UGT and thus inhibits the metabolism of lamotrigine and lorazepam. A high proportion of valproate is bound to albumin, and the high molar concentrations of valproate in the clinical setting result in valproate's displacing phenytoin and other drugs from albumin.
With respect to phenytoin in particular, valproate's inhibition of the drug's metabolism is countered by displacement of phenytoin from albumin. The concurrent administration of valproate and clonazepam has been associated with the development of absence status epilepticus; however, this complication appears to be rare. Therapeutic Uses Valproate is effective in the treatment of absence, myoclonic, partial, and tonic-clonic seizures. The initial daily dose usually is 15 mg/kg, and this is increased at weekly intervals by 5 to 10 mg/kg per day to a maximum daily dose of 60 mg/kg. Divided doses should be given when the total daily dose exceeds 250 mg. The therapeutic uses of valproate in epilepsy are discussed further at the end of this chapter. Benzodiazepines The benzodiazepines are employed clinically primarily as sedative-antianxiety drugs; their pharmacology is presented in detail in Chapters 17: Hypnotics and Sedatives and 19: Drugs and the Treatment of Psychiatric Disorders: Depression and Anxiety Disorders. Discussion in this chapter is limited to consideration of their usefulness in the therapy of the epilepsies. A large number of benzodiazepines have broad antiseizure properties, but only clonazepam (KLONOPIN) and clorazepate (TRANXENE-SD; others) have been approved in the United States for the long-term treatment of certain types of seizures. Diazepam (VALIUM, DIASTAT; others) and lorazepam (ATIVAN) have well-defined roles in the management of status epilepticus. The structures of the benzodiazepines are shown in Chapter 17: Hypnotics and Sedatives.
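The valproate titration schedule given earlier (start at 15 mg/kg per day, increase weekly by 5 to 10 mg/kg per day to a maximum of 60 mg/kg per day) can be sketched as a simple generator. The function and its parameter names are illustrative only, not a dosing recommendation:

```python
import itertools

def valproate_titration(weight_kg, step_mg_kg=10.0,
                        start_mg_kg=15.0, max_mg_kg=60.0):
    """Yield successive weekly total daily doses in mg (illustrative only)."""
    dose = start_mg_kg
    while True:
        yield round(dose * weight_kg)
        dose = min(dose + step_mg_kg, max_mg_kg)

# For a hypothetical 40-kg patient, using the upper 10 mg/kg weekly step:
weekly = list(itertools.islice(valproate_titration(40), 6))
print(weekly)  # [600, 1000, 1400, 1800, 2200, 2400]
```

Note that every dose in this example exceeds 250 mg, so per the text each would be given in divided doses.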

Antiseizure Properties In animals, prevention of pentylenetetrazol-induced seizures by the benzodiazepines is much more prominent than is their modification of the maximal electroshock seizure pattern. Clonazepam is unusually potent in antagonizing the effects of pentylenetetrazol, but it is almost without action on seizures induced by maximal electroshock. Benzodiazepines, including clonazepam, suppress the spread of kindled seizures and generalized convulsions produced by stimulation of the amygdala, but do not abolish the abnormal discharge at the site of stimulation. Mechanism of Action The antiseizure actions of the benzodiazepines, as well as other effects that occur at nonsedating doses, result in large part from their ability to enhance GABA-mediated synaptic inhibition. Molecular cloning and study of recombinant receptors have demonstrated that the benzodiazepine receptor is an integral part of the GABAA receptor (see Chapter 17: Hypnotics and Sedatives). At therapeutically relevant concentrations, benzodiazepines act at subsets of GABAA receptors and increase the frequency, but not duration, of openings at GABA-activated chloride channels (Twyman et al., 1989). At higher concentrations, diazepam and many other benzodiazepines can reduce sustained high-frequency firing of neurons, similar to the effects of phenytoin, carbamazepine, and valproate. Although these concentrations correspond to those achieved in patients during treatment of status epilepticus with diazepam, they are considerably higher than those associated with antiseizure or anxiolytic effects in ambulatory patients. Pharmacokinetic Properties Benzodiazepines are well absorbed after oral administration, and concentrations in plasma are usually maximal within 1 to 4 hours. After intravenous administration, they are redistributed in a manner typical of that for highly lipid-soluble agents (see Chapter 1: Pharmacokinetics: The Dynamics of Drug Absorption, Distribution, and Elimination). 
Central effects develop promptly but wane rapidly as the drugs move to other tissues. Diazepam is redistributed especially rapidly, with a half-life of redistribution of about 1 hour. The extent of binding of benzodiazepines to plasma proteins correlates with lipid solubility, ranging from approximately 99% for diazepam to about 85% for clonazepam (see Appendix II). The major metabolite of diazepam, N-desmethyl-diazepam, is somewhat less active than the parent drug and may behave as a partial agonist. This metabolite also is produced by the rapid decarboxylation of clorazepate following its ingestion. Both diazepam and N-desmethyl-diazepam are slowly hydroxylated to other active metabolites, such as oxazepam. The half-life of diazepam in plasma is between 1 and 2 days, while that of N-desmethyl-diazepam is about 60 hours. Clonazepam is metabolized principally by reduction of the nitro group to produce inactive 7-amino derivatives. Less than 1% of the drug is recovered unchanged in the urine. The half-life of clonazepam in plasma is about 1 day. Lorazepam is metabolized chiefly by conjugation with glucuronic acid; its half-life in plasma is about 14 hours.

Toxicity

The principal side effects of long-term oral therapy with clonazepam are drowsiness and lethargy. These occur in about 50% of patients initially, but tolerance often develops with continued administration. Muscular incoordination and ataxia are less frequent. Although these symptoms usually can be kept to tolerable levels by reducing the dosage or the rate at which it is increased, they sometimes force discontinuation of the drug. Other side effects include hypotonia, dysarthria, and dizziness. Behavioral disturbances, especially in children, can be very troublesome; these include aggression, hyperactivity, irritability, and difficulty in concentration. Both anorexia and hyperphagia have been reported. Increased salivary and bronchial secretions may cause difficulties in children. Seizures are sometimes exacerbated, and status epilepticus may be precipitated if the drug is discontinued abruptly. Other aspects of the toxicity of the benzodiazepines are discussed in Chapter 17: Hypnotics and Sedatives. Cardiovascular and respiratory depression may occur after the intravenous administration of diazepam, clonazepam, or lorazepam, particularly if other antiseizure agents or central depressants have been administered previously.

Plasma Drug Concentrations

Because tolerance affects the relationship between drug concentration and antiseizure effect, plasma concentrations of benzodiazepines are of limited value.

Therapeutic Uses

Clonazepam is useful in the therapy of absence seizures as well as myoclonic seizures in children. However, tolerance to its antiseizure effects usually develops after 1 to 6 months of administration, after which some patients no longer will respond to clonazepam at any dosage. The initial dose of clonazepam for adults should not exceed 1.5 mg per day and for children is 0.01 to 0.03 mg/kg per day. The dose-dependent side effects are reduced if two or three divided doses are given each day. The dose may be increased every 3 days in amounts of 0.25 to 0.5 mg per day in children and 0.5 to 1 mg per day in adults. The maximal recommended dose is 20 mg per day for adults and 0.2 mg/kg per day for children. While diazepam is an effective agent for treatment of status epilepticus, its short duration of action is a disadvantage, leading to the frequent use of intravenous phenytoin in combination with diazepam. Diazepam is administered intravenously at a rate of no more than 5 mg per minute.
The usual dose for adults is 5 to 10 mg, as required; this may be repeated at intervals of 10 to 15 minutes, up to a maximal dose of 30 mg. If necessary, this regimen can be repeated in 2 to 4 hours, but no more than 100 mg should be administered in a 24-hour period. Although diazepam is not useful as an oral agent for the treatment of seizure disorders, clorazepate is effective in combination with certain other drugs in the treatment of partial seizures. The maximal initial dose of clorazepate is 22.5 mg per day in three portions for adults and 15 mg per day in two doses for children. Clorazepate is not recommended for children under the age of 9.

Other Antiseizure Agents

Gabapentin

Gabapentin (NEURONTIN) is an antiseizure drug that was approved by the United States Food and Drug Administration in 1993. The chemical structure of gabapentin is a GABA molecule covalently bound to a lipophilic cyclohexane ring. Gabapentin was designed to be a centrally active GABA agonist, its high lipid solubility aimed at facilitating its transfer across the blood-brain barrier. The structure of gabapentin is shown below:

Pharmacological Effects and Mechanisms of Action

Gabapentin inhibits tonic hindlimb extension in the electroshock seizure model. Interestingly, gabapentin also inhibits clonic seizures induced by pentylenetetrazol. Its efficacy in both these tests parallels that of valproic acid and distinguishes it from phenytoin and carbamazepine. The anticonvulsant mechanism of action of gabapentin is unknown. Despite its design as a GABA agonist, gabapentin does not mimic GABA when iontophoretically applied to neurons in primary culture. Gabapentin may promote nonvesicular release of GABA through a poorly understood mechanism (Honmou et al., 1995). Gabapentin does bind a protein in cortical membranes with an amino acid sequence identical to that of the α2δ subunit of the L-type voltage-sensitive Ca2+ channel. Yet gabapentin does not affect Ca2+ currents of the T-, N-, or L-type Ca2+ channels in dorsal root ganglion cells (Macdonald and Greenfield, 1997). Gabapentin has not been found consistently to reduce sustained repetitive firing of action potentials (Macdonald and Kelly, 1993).

Pharmacokinetics

Gabapentin is absorbed after oral administration and is not metabolized in human beings. It is excreted unchanged, mainly in the urine. Its half-life, when it is used as monotherapy, is 5 to 9 hours. Concurrent administration of gabapentin does not affect the plasma concentrations of phenytoin, carbamazepine, phenobarbital, or valproate.

Therapeutic Uses

Gabapentin is approved by the FDA for treating partial seizures, with and without secondary generalization, in adults when used in addition to other antiseizure drugs. Double-blind, placebo-controlled trials of patients with refractory partial seizures demonstrated that addition of gabapentin to other antiseizure drugs was superior to placebo. The median seizure decrease induced by gabapentin was approximately 27%, compared with 12% for placebo.
A double-blind study of gabapentin (900 or 1800 mg/day) monotherapy disclosed that gabapentin was similar in efficacy to carbamazepine (600 mg/day). Gabapentin also is being used for migraine, chronic pain, and bipolar disorder. Gabapentin usually is effective in doses of 900 to 1800 mg daily in three doses. Therapy usually is begun with a low dose (300 mg once on the first day), and the dose is increased in daily increments of 300 mg until an effective dose is reached.

Toxicity

The most common adverse effects of gabapentin are somnolence, dizziness, ataxia, and fatigue. These effects usually are mild to moderate in severity but resolve within two weeks of onset during continued treatment. Overall, gabapentin is well tolerated.

Lamotrigine

Lamotrigine (LAMICTAL) is a phenyltriazine derivative initially developed as an antifolate agent, based upon the incorrect idea that reducing folate would effectively combat seizures. Structure-activity studies indicate that its effectiveness as an antiseizure drug is unrelated to its antifolate properties (Macdonald and Greenfield, 1997). It was approved by the Food and Drug Administration in 1994. Its chemical structure is:

Pharmacological Effects and Mechanisms of Action

Lamotrigine suppresses tonic hindlimb extension in the maximal electroshock model and partial and secondarily generalized seizures in the kindling model but does not inhibit clonic motor seizures induced by pentylenetetrazol. Lamotrigine blocks sustained repetitive firing of mouse spinal cord neurons and delays the recovery from inactivation of recombinant Na+ channels, mechanisms similar to those of phenytoin and carbamazepine (Xie et al., 1995). This may well explain lamotrigine's actions on partial and secondarily generalized seizures. However, as mentioned below, lamotrigine is effective against a broader spectrum of seizures than phenytoin and carbamazepine, suggesting that lamotrigine may have actions in addition to regulating recovery from inactivation of Na+ channels. The mechanisms underlying its broad spectrum of actions are incompletely understood. One possibility involves lamotrigine's inhibition of glutamate release in rat cortical slices treated with veratridine, a Na+ channel activator, raising the possibility that lamotrigine inhibits synaptic release of glutamate by acting at Na+ channels themselves.

Pharmacokinetics

Lamotrigine is completely absorbed from the gastrointestinal tract and is metabolized primarily by glucuronidation. The plasma half-life of a single dose is 24 to 35 hours. Administration of phenytoin, carbamazepine, phenobarbital, or primidone reduces the half-life of lamotrigine to approximately 15 hours and reduces plasma concentrations of lamotrigine. Conversely, addition of valproate markedly increases plasma concentrations of lamotrigine, likely by inhibiting glucuronidation. Addition of lamotrigine to valproic acid reduces valproate concentrations by approximately 25% over a few weeks. Concurrent use of lamotrigine and carbamazepine is associated with increases of the 10,11-epoxide of carbamazepine and clinical toxicity in some patients.
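Half-life figures like those above (24 to 35 hours for lamotrigine alone, roughly 15 hours with an enzyme inducer) translate directly into the fraction of a dose remaining at any time and the approximate time to steady state (about four half-lives). The following sketch illustrates that first-order arithmetic; it is purely illustrative and not clinical guidance:

```python
def fraction_remaining(hours, half_life):
    """First-order elimination: fraction of a dose left after `hours`."""
    return 0.5 ** (hours / half_life)

def time_to_steady_state(half_life, n_half_lives=4):
    """Rough time to approach steady state (~4 half-lives, ~94% of plateau)."""
    return n_half_lives * half_life

# Lamotrigine alone (t1/2 ~ 24 h) vs. with an enzyme inducer (t1/2 ~ 15 h):
for t_half in (24, 15):
    left = fraction_remaining(24, t_half)
    print(f"t1/2 = {t_half} h: {left:.0%} of a dose remains after 24 h; "
          f"steady state in ~{time_to_steady_state(t_half):.0f} h")
```

The shorter half-life under enzyme induction means both faster washout between doses and a quicker approach to (a lower) plateau, which is why inducers reduce lamotrigine plasma concentrations.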
Therapeutic Use

Lamotrigine is useful for monotherapy and add-on therapy of partial and secondarily generalized tonic-clonic seizures in adults and Lennox-Gastaut syndrome in both children and adults. A double-blind comparison of lamotrigine and carbamazepine monotherapy in newly diagnosed partial or generalized tonic-clonic seizures disclosed similar efficacy for the two drugs, but lamotrigine was better tolerated (Brodie et al., 1995). A double-blind, placebo-controlled trial of addition of lamotrigine to existing antiseizure drugs demonstrated effectiveness of lamotrigine against tonic-clonic seizures and drop attacks in children with the Lennox-Gastaut syndrome (Motte et al., 1997).

Lennox-Gastaut syndrome is a disorder of childhood characterized by multiple seizure types, mental retardation, and refractoriness to antiseizure medication. There also is emerging evidence that lamotrigine is effective against juvenile myoclonic epilepsy and absence epilepsy. Patients who already are taking a hepatic enzyme-inducing antiseizure drug (such as carbamazepine, phenytoin, phenobarbital, or primidone, but not valproate) should be given lamotrigine initially at 50 mg per day for 2 weeks. The dose is increased to 50 mg twice per day for 2 weeks and then increased in increments of 100 mg/day each week up to a maintenance dose of 300 to 500 mg/day divided into two doses. For patients taking valproate in addition to an enzyme-inducing antiseizure drug, the initial dose should be 25 mg every other day for 2 weeks, followed by an increase to 25 mg/day for 2 weeks; the dose then can be increased by 25 to 50 mg/day every 1 to 2 weeks up to a maintenance dose of 100 to 150 mg/day divided into two doses.

Toxicity

The most common adverse effects when lamotrigine is added to another antiseizure drug are dizziness, ataxia, blurred or double vision, nausea, vomiting, and rash. A few cases of Stevens-Johnson syndrome and disseminated intravascular coagulation have been reported.

Acetazolamide

Acetazolamide, the prototype for the carbonic anhydrase inhibitors, is discussed in Chapter 29: Diuretics. Its antiseizure actions are discussed in previous editions of this textbook. Although it is sometimes effective against absence seizures, its usefulness is limited by the rapid development of tolerance. Adverse effects are minimal when it is used in moderate dosage for limited periods.

Felbamate

Felbamate (FELBATOL) is a dicarbamate that was approved by the Food and Drug Administration for partial seizures in 1993.
An association between felbamate and aplastic anemia in at least ten cases resulted in a recommendation by the Food and Drug Administration and the manufacturer for the immediate withdrawal of most patients from treatment with this drug. The structure of felbamate is shown below:

Felbamate is effective in both the maximal electroshock and pentylenetetrazol seizure models. Clinically relevant concentrations of felbamate inhibit NMDA-evoked responses and potentiate GABA-evoked responses in whole-cell, voltage-clamp recordings of cultured rat hippocampal neurons (Rho et al., 1994). This dual action on excitatory and inhibitory transmitter responses may contribute to the wide spectrum of action of the drug in seizure models. An active-control, randomized, double-blind protocol demonstrated the efficacy of felbamate in patients with poorly controlled partial and secondarily generalized seizures (Sachdeo et al., 1992). Felbamate also was found to be efficacious against seizures in patients with Lennox-Gastaut syndrome (The Felbamate Study Group in Lennox-Gastaut Syndrome, 1993). The clinical efficacy of this compound, which inhibited responses to NMDA and potentiated those to GABA, underscores the potential value of additional antiseizure agents with similar mechanisms of action.

Levetiracetam

Levetiracetam (KEPPRA) is a pyrrolidine, the racemically pure S-enantiomer of α-ethyl-2-oxo-1-pyrrolidineacetamide, which was approved by the Food and Drug Administration in 1999 for treating partial seizures in adults when used in addition to other drugs. Its structure is:

Pharmacological Effects and Mechanism of Action

Levetiracetam exhibits a novel pharmacological profile insofar as it inhibits partial and secondarily generalized tonic-clonic seizures in the kindling model yet is ineffective against maximum electroshock- and pentylenetetrazol-induced seizures, findings consistent with effectiveness against partial and secondarily generalized tonic-clonic seizures clinically. The mechanism by which levetiracetam exerts these antiseizure effects is unknown. No evidence for an action on voltage-gated Na+ channels or on either GABA- or glutamate-mediated synaptic transmission has emerged. A stereoselective binding site has been identified in rat brain membranes, but the molecular identity of this site remains obscure.

Pharmacokinetics

Levetiracetam is rapidly and almost completely absorbed after oral administration. Ninety-five percent of the drug and its metabolite are excreted in the urine, 65% of which is unchanged drug; 24% of the drug is metabolized by hydrolysis of the acetamide group. It neither induces nor is a high-affinity substrate for cytochrome P450 isoforms or glucuronidation enzymes and thus is devoid of known interactions with other antiseizure drugs, oral contraceptives, or anticoagulants.

Therapeutic Use

A double-blind, placebo-controlled trial of adults with refractory partial seizures demonstrated that addition of levetiracetam to other antiseizure medications was superior to placebo. Its efficacy for monotherapy is being investigated.

Toxicity

The drug is well tolerated. The most frequently reported adverse effects are somnolence, asthenia, and dizziness.

Tiagabine

Tiagabine (GABITRIL) is a derivative of nipecotic acid that was approved by the Food and Drug Administration in 1998 for treating partial seizures in adults when used in addition to other drugs. Its structure is as follows:

Pharmacological Effects and Mechanism of Action

Tiagabine inhibits the GABA transporter GAT-1 and thereby reduces GABA uptake into neurons and glia. In CA1 neurons of the hippocampus, tiagabine increases the duration of inhibitory synaptic currents, findings consistent with prolonging the effect of GABA at inhibitory synapses through reducing its reuptake by GAT-1. Tiagabine inhibits maximum electroshock seizures and both limbic and secondarily generalized tonic-clonic seizures in the kindling model, results suggestive of efficacy against partial and tonic-clonic seizures clinically.

Pharmacokinetics

Tiagabine is rapidly absorbed after oral administration, extensively bound to proteins, and metabolized mainly in the liver, predominantly by CYP3A. Its half-life is about 8 hours but is shortened by 2 to 3 hours when the drug is coadministered with hepatic enzyme-inducing drugs such as phenobarbital, phenytoin, or carbamazepine.

Therapeutic Use

Double-blind, placebo-controlled trials have established tiagabine's efficacy as add-on therapy of refractory partial seizures with or without secondary generalization. Its efficacy as monotherapy for this indication has not yet been established.

Toxicity

The principal adverse effects include dizziness, somnolence, and tremor; they appear to be mild to moderate in severity and occur shortly after drug initiation. The fact that tiagabine and other drugs thought to enhance effects of synaptically released GABA can facilitate spike-and-wave discharges in animal models of absence seizures raises the possibility that tiagabine may be contraindicated in patients with generalized absence epilepsy.

Topiramate

Topiramate (TOPAMAX) is a sulfamate-substituted monosaccharide that was approved by the Food and Drug Administration in 1996 for partial seizures in adults when used in addition to other drugs. Its structure is as follows:

Pharmacological Effects and Mechanisms of Action

Topiramate reduces voltage-gated Na+ currents in cerebellar granule cells and may act on the inactivated state of the channel in a manner similar to that of phenytoin. In addition, topiramate enhances postsynaptic GABAA-receptor currents and also limits activation of the AMPA-kainate subtype(s) of glutamate receptor. Topiramate also is a weak carbonic anhydrase inhibitor. Topiramate inhibits maximal electroshock and pentylenetetrazol-induced seizures as well as partial and secondarily generalized tonic-clonic seizures in the kindling model, findings predictive of a broad spectrum of antiseizure actions clinically.

Pharmacokinetics

Topiramate is rapidly absorbed after oral administration and is mainly excreted unchanged in the urine. The remainder undergoes metabolism by hydroxylation, hydrolysis, and glucuronidation, with no one metabolite accounting for more than 5% of an oral dose. Its half-life is about a day. Reduced estradiol plasma concentrations occur with concurrent topiramate, suggesting the need for higher doses of oral contraceptives when coadministered with topiramate.

Therapeutic Use

Double-blind, placebo-controlled studies established the efficacy of topiramate in both adults and children with refractory partial seizures with or without secondarily generalized tonic-clonic seizures. Topiramate also was found to be significantly more effective than placebo against both drop attacks and tonic-clonic seizures in patients with Lennox-Gastaut syndrome and against tonic-clonic and myoclonic seizures in adults and children with primary generalized epilepsy. A pilot study suggests that topiramate may be effective against infantile spasms.

Toxicity

Topiramate is well tolerated. The most common adverse effects are somnolence, fatigue, weight loss, and nervousness.
Zonisamide

Zonisamide (ZONEGRAN) is a sulfonamide derivative that was approved by the Food and Drug Administration in 2000 for partial seizures in adults when used in addition to other drugs. Its structure is as follows:

Pharmacological Effects and Mechanism of Action

Zonisamide inhibits T-type Ca2+ currents. In addition, zonisamide inhibits the sustained, repetitive firing of spinal cord neurons, presumably by prolonging the inactivated state of voltage-gated Na+ channels in a manner similar to the actions of phenytoin and carbamazepine. Zonisamide inhibits tonic hindlimb extension evoked by maximal electroshock and inhibits both partial and secondarily generalized seizures in the kindling model, results predictive of clinical effectiveness against partial and secondarily generalized tonic-clonic seizures. Zonisamide does not inhibit minimal clonic seizures induced by pentylenetetrazol, suggesting that the drug will not be effective clinically against myoclonic seizures. Zonisamide's inhibition of T-type Ca2+ currents suggests that it may be effective against absence seizures, yet its effects in absence models such as the lethargic mouse or the genetic absence epilepsy rat of Strasbourg have not been reported.

Pharmacokinetics

Zonisamide is almost completely absorbed after oral administration, has a long half-life (about 63 hours), and is about 40% bound to plasma protein. Approximately 85% of an oral dose is excreted in the urine, principally as unmetabolized zonisamide and a glucuronide of sulfamoylacetyl phenol, which is a product of metabolism by CYP3A4. Phenobarbital, phenytoin, and carbamazepine decrease the plasma concentration/dose ratio of zonisamide, whereas lamotrigine increases this ratio. Conversely, zonisamide has little effect on the plasma concentrations of other antiseizure drugs.

Therapeutic Use

Double-blind, placebo-controlled studies of patients with refractory partial seizures demonstrated that addition of zonisamide to other drugs was superior to placebo. Additional studies of zonisamide have been initiated in absence seizures, infantile spasms, and Lennox-Gastaut syndrome, but only largely anecdotal data are currently available.
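Zonisamide's long half-life (about 63 hours), noted above, implies substantial accumulation with repeated dosing. For first-order elimination, the standard steady-state accumulation factor is R = 1 / (1 − 0.5^(τ/t½)), where τ is the dosing interval; the sketch below works through that arithmetic for illustration only:

```python
def accumulation_factor(dosing_interval_h, half_life_h):
    """Steady-state accumulation for repeated dosing with first-order
    elimination: R = 1 / (1 - 0.5**(tau / t_half)).
    Illustrative pharmacokinetic arithmetic, not dosing guidance."""
    return 1.0 / (1.0 - 0.5 ** (dosing_interval_h / half_life_h))

# Zonisamide, t1/2 ~ 63 h, dosed once daily (tau = 24 h):
print(round(accumulation_factor(24, 63), 1))  # roughly fourfold accumulation
```

A drug dosed once per half-life accumulates about twofold; dosing far more often than once per half-life, as here, raises the plateau well above single-dose levels, which is consistent with the week-plus time scale needed to reach steady state for a 63-hour half-life.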
Toxicity

Overall, zonisamide is well tolerated. The most common adverse effects include somnolence, ataxia, anorexia, nervousness, and fatigue. Approximately 1% of individuals develop renal calculi during treatment with zonisamide; the mechanism of this effect is obscure.

General Principles and Choice of Drugs for the Therapy of the Epilepsies

Early diagnosis and treatment of seizure disorders with a single appropriate agent offers the best prospect of achieving prolonged seizure-free periods with the lowest risk of toxicity. An attempt should be made to determine the cause of the epilepsy with the hope of discovering a correctable lesion, either structural or metabolic. The efficacy of antiseizure drugs has been assessed in clinical trials on the basis of seizure type, not epilepsy syndrome type, and thus seizure type determines drug selection. The drugs commonly used for distinct seizure types are listed in Table 21-1. The efficacy combined with the unwanted effects of a given drug determine which particular drug is optimal for a given patient. The first issue that arises is whether and when to initiate treatment. For example, it may not be necessary to initiate antiseizure therapy after an isolated tonic-clonic seizure in a healthy young adult who lacks a family history of epilepsy and who has a normal neurological exam, a normal EEG, and a normal brain magnetic resonance imaging (MRI) scan. That is, the odds of seizure recurrence in the next year (15%) approximate the risk of a drug reaction sufficiently severe to warrant discontinuation of medication (Bazil and Pedley, 1998). Alternatively, a similar seizure occurring in an individual with a positive family history of epilepsy, an abnormal neurological exam, an abnormal EEG, and an abnormal MRI carries a risk of recurrence approximating 60%, odds that favor initiation of therapy. Unless extenuating circumstances such as status epilepticus exist, medication should be initiated with a single drug. Initial dosage usually is that expected to provide a plasma drug concentration during the plateau state at least in the lower portion of the range associated with clinical efficacy. To minimize dose-related adverse effects, therapy with many drugs is initiated at reduced dosage. Dosage is increased at appropriate intervals, as required for control of seizures or as limited by toxicity, and such adjustment is preferably assisted by monitoring of drug concentrations in plasma. Compliance with a properly selected, single drug in maximal tolerated dosage results in complete control of seizures in approximately 50% of patients. If a seizure occurs despite optimal drug levels, the physician should assess the presence of potential precipitating factors such as sleep deprivation, a concurrent febrile illness, or drugs, including large amounts of caffeine and over-the-counter medications that can lower the seizure threshold.
If compliance has been confirmed yet seizures persist, another drug should be substituted. Unless serious adverse effects of the drug dictate otherwise, dosage always should be reduced gradually when a drug is being discontinued, to minimize the risk of seizure recurrence. In the case of partial seizures in adults, the diversity of available drugs permits selection of a second drug that acts by a distinct mechanism. Smith et al. (1987) found that 55% of such patients could be managed satisfactorily on a second single drug, yet others report that only 9% to 11% of patients with complex partial seizures failing an initial drug achieve complete seizure control with a second single drug (Schmidt and Richter, 1986; Dasheiff et al., 1986). In the event that therapy with a second single drug also is inadequate, many physicians resort to treatment with two drugs simultaneously. This decision should not be taken lightly, because most patients obtain optimal seizure control with fewest unwanted effects when taking a single drug. Nonetheless, some patients will not be controlled adequately without the use of two or more antiseizure agents simultaneously. No properly controlled studies have systematically compared one particular drug combination with another. It seems wise to select two drugs that act by distinct mechanisms (e.g., one that promotes Na+ channel inactivation and another that enhances GABA-mediated synaptic inhibition). Additional issues that warrant careful consideration are the unwanted effects of each drug and the potential drug interactions. As specified in Table 21-2, many of these drugs induce expression of cytochrome P450 enzymes and thereby affect the metabolism of themselves and/or other drugs. Overall, the more recently developed antiseizure drugs present fewer problems with respect to drug interactions. If a patient fails two drugs in monotherapy, the odds that polytherapy will provide complete control are small.
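The advice to pair drugs with distinct mechanisms can be expressed as a simple lookup over mechanism classes. The grouping below is a rough sketch assembled from statements in this chapter; it is deliberately simplified (several of these drugs, valproate in particular, act by more than one mechanism), so it is illustrative rather than authoritative:

```python
# Simplified primary-mechanism classes drawn from this chapter.
# Many drugs act by multiple mechanisms; this mapping is illustrative only.
MECHANISM = {
    "phenytoin": "Na+ channel inactivation",
    "carbamazepine": "Na+ channel inactivation",
    "lamotrigine": "Na+ channel inactivation",
    "valproate": "Na+ channel inactivation",
    "phenobarbital": "GABA-mediated inhibition",
    "clonazepam": "GABA-mediated inhibition",
    "tiagabine": "GABA-mediated inhibition",
    "ethosuximide": "T-type Ca2+ current",
    "zonisamide": "T-type Ca2+ current",
}

def distinct_mechanism(drug_a, drug_b):
    """True if the two drugs fall into different (simplified) classes."""
    return MECHANISM[drug_a] != MECHANISM[drug_b]

print(distinct_mechanism("carbamazepine", "tiagabine"))  # True
print(distinct_mechanism("phenytoin", "lamotrigine"))    # False
```

The point of the sketch is only that a combination such as carbamazepine plus tiagabine crosses mechanism classes, whereas phenytoin plus lamotrigine does not; actual combination choices also weigh toxicity and pharmacokinetic interactions, as the text emphasizes.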
Alternative measures such as epilepsy surgery should be considered. Essential to optimal management of epilepsy is the keeping of a seizure chart by the patient or a relative. Frequent visits to the physician or seizure clinic may be necessary early in the period of treatment, since hematological and other possible side effects may require consideration of a change in medication. Long-term follow-up with neurological examinations and possibly EEG and neuroimaging studies is appropriate. Most crucial for successful management is regularity of medication, since faulty compliance is the most frequent cause of failure of therapy with antiseizure drugs. Measurement of plasma drug concentration at appropriate intervals greatly facilitates the initial adjustment of dosage for individual differences in drug elimination and the subsequent adjustment of dosage to minimize dose-related adverse effects without sacrifice of seizure control. Periodic monitoring during maintenance therapy can detect failure of the patient to take the medication as prescribed. Knowledge of plasma drug concentration can be especially helpful during multiple-drug therapy. If toxicity occurs, monitoring helps to identify the particular drug(s) responsible, and if a pharmacokinetic drug interaction occurs, monitoring can guide readjustment of dosage.

Duration of Therapy

In an attempt to provide guidelines for withdrawal of antiseizure drugs, Shinnar et al. (1994) prospectively studied 264 children in whom antiseizure drugs were discontinued after a mean seizure-free interval of 2.9 years. Children were followed for a mean of 58 months to assess seizure recurrence. Seizures recurred in 36% of children. Factors associated with an increased risk of recurrence included a positive family history of epilepsy, slowing on the EEG prior to withdrawal, onset of epilepsy after age 12 (compared with younger ages), atypical febrile seizures, and certain epileptic syndromes such as juvenile myoclonic epilepsy. In a prospective study, the treatment of patients with generalized or partial seizures was stopped after 2 seizure-free years; only patients who had been treated with a single drug (phenytoin, carbamazepine, or valproate) were included (Callaghan et al., 1988). The overall rate of relapse (within 3 years) was approximately 33% in both children and adults.
Although only 92 patients were studied, the risk of relapse was apparently greatest for patients with complex partial seizures or those who had a persistently abnormal EEG. Although these and other results are encouraging, it is not yet possible to provide clear guidelines for the selection of patients for withdrawal from therapy. Such decisions must be made on an individual basis, weighing both the medical and psychosocial consequences of recurrence of seizures against the potential toxicity associated with prolonged therapy. If a decision to withdraw antiseizure drugs is made, withdrawal should be done gradually over a period of months; the risk of status epilepticus is increased with abrupt cessation of therapy.

Simple and Complex Partial and Secondarily Generalized Tonic-Clonic Seizures

The efficacy and toxicity of carbamazepine, phenobarbital, phenytoin, and primidone for treatment of partial and secondarily generalized tonic-clonic seizures in adults have been examined in a double-blind prospective study (Mattson et al., 1985). A subsequent double-blind prospective study compared carbamazepine with valproate (Mattson et al., 1992). Carbamazepine and phenytoin were the most effective overall for single-drug therapy of partial or generalized tonic-clonic seizures. The choice between carbamazepine and phenytoin required assessment of the toxic effects of the drugs. Primidone was associated with a greater incidence of toxicity early in the course of therapy, including nausea, dizziness, ataxia, and somnolence. Decreased libido and impotence were associated with all four drugs (carbamazepine 13%, phenobarbital 16%, phenytoin 11%, and primidone 22%), but significantly more commonly with primidone. The study comparing carbamazepine with valproate revealed that carbamazepine provided superior control of complex partial seizures. With respect to adverse effects, carbamazepine was more commonly associated with skin rash, but valproate was more commonly associated with tremor and weight gain. Overall, the data demonstrated that carbamazepine and phenytoin are preferable for treatment of partial seizures, but phenobarbital, valproic acid, and primidone also are efficacious. A double-blind comparison of lamotrigine and carbamazepine disclosed similar efficacy of the two drugs, but lamotrigine was better tolerated (Brodie et al., 1995). Lamotrigine is used for monotherapy of partial and secondarily generalized tonic-clonic seizures. Multiple drugs recently were approved for add-on therapy of these seizures, including gabapentin, levetiracetam, tiagabine, topiramate, and zonisamide. Control of secondarily generalized tonic-clonic seizures did not differ significantly with carbamazepine, phenobarbital, phenytoin, or primidone (Mattson et al., 1985). Valproate was as effective as carbamazepine for control of secondarily generalized tonic-clonic seizures (Mattson et al., 1992). Since secondarily generalized tonic-clonic seizures usually coexist with partial seizures, carbamazepine, phenytoin, and lamotrigine are first-line drugs for these conditions.

Absence Seizures

The best current data indicate that ethosuximide and valproate are equally effective in the treatment of absence seizures (see Mikati and Browne, 1988). Between 50% and 75% of newly diagnosed patients can be rendered free of seizures. In the event that tonic-clonic seizures are present or emerge during therapy, valproate is the agent of first choice. Emerging evidence suggests that lamotrigine is effective for absence seizures (Bazil and Pedley, 1998).

Myoclonic Seizures

Valproic acid is the drug of choice for myoclonic seizures in the syndrome of juvenile myoclonic epilepsy, in which myoclonic seizures often coexist with tonic-clonic and also absence seizures.
Monotherapy with lamotrigine may be effective in some patients with juvenile myoclonic epilepsy in whom valproic acid proves unsatisfactory (Bazil and Pedley, 1998).

Febrile Convulsions

Two percent to 4% of children experience a convulsion associated with a febrile illness. From 25% to 33% of these children will have another febrile convulsion. Only 2% to 3% become epileptic in later years; this is a sixfold increase in risk compared with the general population. Several factors are associated with an increased risk of developing epilepsy: a preexisting neurological disorder or developmental delay, a family history of epilepsy, or a complicated febrile seizure (i.e., the febrile seizure lasted more than 15 minutes, was one-sided, or was followed by a second seizure in the same day). Even if all of these risk factors are present, the risk of developing epilepsy is only 10%. Concern regarding the increased risk of developing epilepsy or other neurological sequelae led many physicians to prescribe antiseizure drugs prophylactically after a febrile seizure. Uncertainties regarding the efficacy of prophylaxis for reducing epilepsy, combined with the substantial side effects of phenobarbital prophylaxis (Farwell et al., 1990), argue against the use of chronic therapy for prophylactic purposes (Freeman, 1992). For children at high risk of developing recurrent febrile seizures and epilepsy, rectally administered diazepam at the time of fever may prevent recurrent seizures and avoid the side effects of chronic therapy.

Seizures in Infants and Young Children

Infantile spasms with hypsarrhythmia are refractory to the usual antiseizure agents; corticotropin or
the adrenocorticosteroids are commonly used. A randomized study found vigabatrin (γ-vinyl GABA) to be efficacious in comparison to placebo (Appleton et al., 1999). Constriction of visual fields has been reported in some adults treated with vigabatrin (Miller et al., 1999). The drug has not been approved by the U.S. Food and Drug Administration but is available in other countries. The Lennox-Gastaut syndrome is a severe form of epilepsy, which usually begins in childhood and is characterized by cognitive impairments and multiple types of seizures, including tonic-clonic, tonic, atonic, myoclonic, and atypical absence seizures. Addition of lamotrigine to other antiseizure drugs resulted in improved seizure control in comparison to placebo in a double-blind trial (Motte et al., 1997), demonstrating lamotrigine to be an effective and well-tolerated drug for this treatment-resistant form of epilepsy. Felbamate also was found to be effective for seizures in this syndrome, but the occasional occurrence of aplastic anemia has limited its use.

Status Epilepticus and Other Convulsive Emergencies

Status epilepticus is a neurological emergency. Mortality for adults approximates 20% (Lowenstein and Alldredge, 1998). The goal of treatment is rapid termination of behavioral and electrical seizure activity; the longer an episode of status epilepticus goes untreated, the more difficult it becomes to control and the greater the risk of permanent brain damage. Critical to management are a clear plan, prompt treatment with effective drugs in adequate doses, and attention to hypoventilation and hypotension. Since hypoventilation may result from high doses of the drugs used for treatment, it may be necessary to assist respiration temporarily. Drugs should be administered by the intravenous route only; because of slow and unreliable absorption, the intramuscular route has no place in the treatment of status epilepticus.
To assess the optimal initial drug regimen, a double-blind, multicenter trial compared four intravenous treatments: diazepam followed by phenytoin; lorazepam; phenobarbital; and phenytoin alone (Treiman et al., 1998). The treatments were shown to have similar efficacies, in that success rates ranged from 44% to 65%, but lorazepam alone was significantly better than phenytoin alone. No significant differences were found with respect to recurrences or adverse reactions.

Antiseizure Therapy and Pregnancy

Use of antiseizure drugs has diverse implications of great importance for the health of women, issues considered in guidelines articulated by the American Academy of Neurology (Morrell, 1998). These issues include interactions with oral contraceptives, potential teratogenic effects, and effects on vitamin K metabolism in pregnant women. The effectiveness of oral contraceptives appears to be reduced by concomitant use of antiseizure drugs: the failure rate of oral contraceptives is 3.1 per 100 woman-years in women receiving antiseizure drugs, compared to a rate of 0.7 per 100 woman-years in nonepileptic women. One attractive explanation of the increased failure rate is the increased rate of oral contraceptive metabolism caused by antiseizure drugs that induce hepatic enzymes (see Table 21-2); particular caution is needed with any antiseizure drug that induces CYP3A4. The apparent teratogenic effects of antiseizure drugs add to the deleterious consequences of oral contraceptive failure. Epidemiological evidence suggests that antiseizure drugs have teratogenic effects. Infants of epileptic mothers are at twofold greater risk of major congenital malformations than offspring of nonepileptic mothers (4% to 8% compared to 2% to 4%). These malformations include congenital heart defects, neural tube defects, and others.
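The contraceptive failure rates cited above amount to roughly a fourfold difference. A quick arithmetic check (a sketch, not from the chapter; the "per 100 woman-years" framing is the conventional unit and is assumed here):

```python
# Relative oral-contraceptive failure rate, using the figures quoted in the text.
failure_with_inducers = 3.1  # failures per 100 woman-years in women on antiseizure drugs
failure_without = 0.7        # failures per 100 woman-years in nonepileptic women

relative_rate = failure_with_inducers / failure_without
print(round(relative_rate, 1))  # 4.4 -> roughly a fourfold-higher failure rate
```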
Inferring causality from the associations found in large epidemiological studies with many uncontrolled variables can be hazardous, but a causal role for antiseizure drugs is suggested by association of congenital defects with higher concentrations of
a drug or with polytherapy compared to monotherapy. Phenytoin, carbamazepine, valproate, and phenobarbital all have been associated with teratogenic effects. Whether the recently developed antiseizure drugs also will be associated with teratogenic effects awaits clinical experience with these agents. One consideration for a woman with epilepsy who wishes to become pregnant is a trial free of antiseizure drugs; monotherapy with careful attention to drug levels is another alternative. Polytherapy with toxic levels should be avoided. Folate supplementation (0.4 mg/day) has been recommended by the United States Public Health Service for all women of childbearing age to reduce the likelihood of neural tube defects, and this is appropriate for epileptic women as well. Antiseizure drugs that induce cytochrome P450 enzymes have been associated with vitamin K deficiency in the newborn, which can result in a coagulopathy and intracerebral hemorrhage in the neonate. Treatment with vitamin K1, 10 mg/day during the last month of gestation, has been recommended for prophylaxis.

Prospectus

Improved therapies for epilepsy are likely to emerge from several lines of investigation over the next decade: (1) Clinical experience and additional clinical trials with the recently approved antiseizure drugs should optimize their utilization for diverse forms of epilepsy. (2) Increased insight into genetic, cellular, and molecular mechanisms of epilepsy emerging from basic investigations should lead to the development of drugs acting by mechanisms distinct from currently available medications. (3) Insight into cellular and molecular mechanisms of epileptogenesis emerging from studies of animal models should lead to pharmacological prophylaxis of individuals at high risk of developing epilepsy.
(4) Pharmacogenomic investigations should optimize selection of antiseizure drugs efficacious in a given individual and permit identification of individuals at high risk for devastating, idiosyncratic drug effects. For further discussion of the epilepsies and convulsive disorders, see Chapter 348 in Harrison's Principles of Internal Medicine, 16th ed., McGraw-Hill, New York, 2005.

Chapter 22. Treatment of Central Nervous System Degenerative Disorders


Overview

The neurodegenerative diseases include common and debilitating disorders such as Parkinson's disease, Alzheimer's disease, Huntington's disease, and amyotrophic lateral sclerosis (ALS). Although the clinical and neuropathological aspects of these disorders are distinct, their unifying feature is that each disorder has a characteristic pattern of neuronal degeneration in anatomically or functionally related regions. Presently available pharmacological treatments for the neurodegenerative disorders are symptomatic and do not alter the course or progression of the underlying disease. The most effective symptomatic therapies are those for Parkinson's disease; a large number of agents from several different pharmacological classes can be used, and, when skillfully applied, these can have a dramatic impact on life span and functional ability. The treatments available for Alzheimer's disease, Huntington's disease, and ALS are less satisfactory but still can make an important
contribution to patient welfare. This chapter reviews current therapeutic agents for treatment of the symptoms of neurodegenerative diseases and introduces the reader to research aimed at developing therapeutic agents that alter the course of neurodegenerative diseases by preventing neuronal death or stimulating neuronal recovery. Related material concerning the serotonergic effects of some of the therapeutic agents employed for Parkinson's disease can be found in Chapter 11: 5-Hydroxytryptamine (Serotonin): Receptor Agonists and Antagonists, and additional information concerning cholinergic agents that are used in treatment of Alzheimer's disease can be found in Chapters 7: Muscarinic Receptor Agonists and Antagonists and 8: Anticholinesterase Agents.

Treatment of Central Nervous System Degenerative Disorders: Introduction

Neurodegenerative disorders are characterized by progressive and irreversible loss of neurons from specific regions of the brain. Prototypical neurodegenerative disorders include Parkinson's disease (PD) and Huntington's disease (HD), where loss of neurons from structures of the basal ganglia results in abnormalities in the control of movement; Alzheimer's disease (AD), where the loss of hippocampal and cortical neurons leads to impairment of memory and cognitive ability; and amyotrophic lateral sclerosis (ALS), where muscular weakness results from the degeneration of spinal, bulbar, and cortical motor neurons. As a group, these disorders are relatively common and represent a substantial medical and societal problem. They are primarily disorders of later life, developing in individuals who are neurologically normal, although childhood-onset forms of each of the disorders are recognized. PD is observed in more than 1% of individuals over the age of 65 (Tanner, 1992), whereas AD affects as many as 10% of the same population (Evans et al., 1989). HD, which is a genetically determined autosomal dominant disorder, is less frequent in the population as a whole but affects 50% of each generation in families carrying the gene. ALS also is relatively rare but often leads rapidly to disability and death (Kurtzke, 1982). At present, the pharmacological therapy of neurodegenerative disorders is limited to symptomatic treatments that do not alter the course of the underlying disease. Symptomatic treatment for PD, where the neurochemical deficit produced by the disease is well defined, is in general relatively successful, and a number of effective agents are available (Lang and Lozano, 1998; Standaert and Stern, 1993). The available treatments for AD, HD, and ALS are much more limited in effectiveness, and the need for new strategies is particularly acute.
Selective Vulnerability and Neuroprotective Strategies

Selective Vulnerability

The most striking feature of this group of disorders is the exquisite specificity of the disease processes for particular types of neurons. For example, in PD there is extensive destruction of the dopaminergic neurons of the substantia nigra, while neurons in the cortex and many other areas of the brain are unaffected (Gibb, 1992; Fearnley and Lees, 1994). In contrast, neural injury in AD is most severe in the hippocampus and neocortex, and even within the cortex, the loss of neurons is not uniform but varies dramatically in different functional regions (Arnold et al., 1991). Even more striking is the observation that, in HD, the mutant gene responsible for the disorder is expressed throughout the brain and in many other organs, yet the pathological changes are largely restricted to the neostriatum (Vonsattel et al., 1985; Landwehrmeyer et al., 1995). In ALS, there is loss of spinal motor neurons and the cortical neurons that provide their descending input (Tandan and Bradley,
1985). The diversity of these patterns of neural degeneration has led to the proposal that the process of neural injury must be viewed as the interaction of genetic and environmental influences with the intrinsic physiological characteristics of the affected populations of neurons. These intrinsic factors may include susceptibility to excitotoxic injury, regional variation in capacity for oxidative metabolism, and the production of toxic free radicals as products of cellular metabolism (Figure 22-1). The factors that convey selective vulnerability may prove to be important targets for neuroprotective agents to slow the progression of neurodegenerative disorders.

Figure 22-1. Mechanisms of Selective Neuronal Vulnerability in Neurodegenerative Diseases.

Genetics

It has long been suspected that genetics plays an important role in the etiology of neurodegenerative disorders, and recent discoveries have begun to shed light on some of the mechanisms responsible. HD is transmitted by autosomal dominant inheritance, and the molecular nature of the genetic defect has been identified (discussed below). Most cases of PD, AD, or ALS are sporadic, but families with a high incidence of each of these diseases have been identified, and these studies have begun to yield important clues to the pathogenesis of the disorders. In the case of PD, mutations in three different proteins can lead to autosomal dominant forms of the disease: α-synuclein, an abundant synaptic protein; parkin, a ubiquitin-protein ligase; and UCHL1, a ubiquitin hydrolase that also participates in ubiquitin-mediated degradation of proteins in the brain (Duvoisin, 1998; Golbe, 1999; Kitada et al., 1998). In AD, mutations in the genes coding for the amyloid precursor protein (APP) and proteins known as the presenilins (which may be involved in APP processing) lead to inherited forms of the disease (Selkoe, 1998). Mutations in the gene coding for copper-zinc superoxide dismutase (SOD1) account for about 2% of the cases of adult-onset ALS (Cudkowicz and Brown, 1996). Although these mutations are rare, their importance extends beyond the families that carry them, because they point to pathways and mechanisms that also may underlie the more common, sporadic cases of these diseases. Genetically determined cases of PD, AD, and ALS are infrequent, but it is likely that an individual's genetic background has an important role in determining the probability of acquiring these diseases. Recent studies of AD have revealed the first of what are likely to be many genetic risk factors for
neurodegenerative disorders, in the form of apolipoprotein E (apo E). This protein, well known to be involved in transport of cholesterol and lipids in blood, is found in three distinct isoforms. Although all of the isoforms carry out their primary role in lipid metabolism equally well, individuals who are homozygous for the apo E ε4 allele ("ε4/ε4") have a much higher lifetime risk of AD than do those homozygous for the apo E ε2 allele ("ε2/ε2"). The mechanism by which the apo E ε4 protein increases the risk of AD is not known, but a secondary function of the protein in the metabolism of APP has been suggested (Roses, 1997).

Environmental Triggers

Infectious agents, environmental toxins, and acquired brain injury have been proposed to have a role in the etiology of neurodegenerative disorders. The role of infection is best documented in the numerous cases of PD that developed following the epidemic of encephalitis lethargica (von Economo's encephalitis) in the early part of the twentieth century. Most contemporary cases of PD are not preceded by encephalitis, and there is no convincing evidence for an infectious contribution to HD, AD, or ALS. Traumatic brain injury has been suggested as a trigger for neurodegenerative disorders, and in the case of AD there is some evidence to support this view (Cummings et al., 1998). At least one toxin, N-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP; discussed under Energy, Metabolism, and Aging, below), can induce a condition closely resembling PD, but evidence for the widespread occurrence of this or a similar toxin in the environment is lacking (Tanner and Langston, 1990).

Excitotoxicity

The term excitotoxicity was coined by Olney (1969) to describe the neural injury that results from the presence of excess glutamate in the brain.
Glutamate is used as a neurotransmitter by many different neural systems and is believed to mediate most excitatory synaptic transmission in the mammalian brain (see Chapter 12: Neurotransmission and the Central Nervous System). Although glutamate is required for normal brain function, the presence of excessive amounts of glutamate can lead to excitotoxic cell death (Lipton and Rosenberg, 1994). The destructive effects of glutamate are mediated by glutamate receptors, particularly those of the N-methyl-D-aspartate (NMDA) type. Unlike other glutamate-gated ion channels, which primarily regulate the flow of Na+, activated NMDA receptor-channels allow an influx of Ca2+, which in excess can activate a variety of potentially destructive processes. The activity of NMDA receptor-channels is regulated not only by the concentration of glutamate in the synaptic space but also by a voltage-dependent blockade of the channel by Mg2+; thus, entry of Ca2+ into neurons through NMDA receptor-channels requires binding of glutamate to NMDA receptors as well as depolarization of the neuron (e.g., by the activity of glutamate at non-NMDA receptors), which relieves the blockade of NMDA channels by extracellular Mg2+. Excitotoxic injury is thought to make an important contribution to the neural death that occurs in acute processes such as stroke and head trauma (Choi and Rothman, 1990). In the chronic neurodegenerative disorders, the role of excitotoxicity is less certain, but it is thought that regional and cellular differences in susceptibility to excitotoxic injury (conveyed, for example, by differences in the types of glutamate receptors expressed) may contribute to selective vulnerability (Young, 1993).

Energy, Metabolism, and Aging

The excitotoxic hypothesis provides a link between selective patterns of neuronal injury, the effects of aging, and observations on the metabolic capacities of neurons (Beal et al., 1993).
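The voltage dependence of the Mg2+ block described above is often summarized with an empirical expression derived from voltage-clamp measurements (the Jahr-Stevens form). The sketch below is illustrative only; the constants (a half-blocking Mg2+ concentration of 3.57 mM at 0 mV and a slope of 0.062 per mV) belong to that model and are assumptions here, not values from this chapter:

```python
# Illustrative sketch: fraction of NMDA receptor-channels free of the
# voltage-dependent Mg2+ block, as a function of membrane potential.
import math

def nmda_unblocked_fraction(v_mv, mg_mm=1.0):
    """Unblocked fraction at membrane potential v_mv (mV), [Mg2+] in mM."""
    return 1.0 / (1.0 + (mg_mm / 3.57) * math.exp(-0.062 * v_mv))

# Depolarization relieves the block, as described in the text:
print(round(nmda_unblocked_fraction(-70.0), 2))  # 0.04 near resting potential
print(round(nmda_unblocked_fraction(0.0), 2))    # 0.78 when depolarized
```

This captures why Ca2+ entry requires both glutamate binding and depolarization: at resting potential nearly all channels are blocked even when glutamate is bound.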
Since the ability of Mg2+ to block the NMDA receptor-channel is dependent on the membrane potential, disturbances that impair the metabolic capacity of neurons will tend to relieve Mg2+ blockade and
predispose to excitotoxic injury. The capacity of neurons for oxidative metabolism declines progressively with age, perhaps in part because of a progressive accumulation of mutations in the mitochondrial genome (Wallace, 1992). Patients with PD exhibit several defects in energy metabolism that are even greater than expected for their age, most notably a reduction in the function of complex I of the mitochondrial electron transport chain (Schapira et al., 1990). Additional evidence for the role of metabolic defects in the etiology of neural degeneration comes from the study of patients who inadvertently self-administered MPTP, a "designer drug" that resulted in symptoms of severe and irreversible parkinsonism (Ballard et al., 1985). Subsequent studies have shown that a metabolite of MPTP induces degeneration of neurons similar to that observed in idiopathic PD and that its mechanism of action appears to be related to an ability to impair mitochondrial energy metabolism in dopaminergic neurons (Przedborski and Jackson-Lewis, 1998). In rodents, neural degeneration similar to that observed in HD can be produced either by direct administration of large doses of NMDA receptor agonists or by more chronic administration of inhibitors of mitochondrial oxidative metabolism, suggesting that disturbances of energy metabolism may underlie the selective pathology of HD as well (Beal et al., 1986, 1993).

Oxidative Stress

Although neurons depend on oxidative metabolism for survival, a consequence of this process is the production of reactive compounds such as hydrogen peroxide and oxyradicals (Cohen and Werner, 1994). Unchecked, these reactive species can lead to DNA damage, peroxidation of membrane lipids, and neuronal death. Several mechanisms serve to limit this oxidative stress, including the presence of reducing compounds such as ascorbate and glutathione and enzymatic mechanisms such as superoxide dismutase, which catalyzes the reduction of superoxide radicals.
Oxidative stress also may be relieved by aminosteroid agents that serve as free radical scavengers. In PD, attention has been focused on the possibility that oxidative stress induced by the metabolism of dopamine may underlie the selective vulnerability of dopaminergic neurons (Jenner, 1998). The primary catabolic pathway of dopamine to 3,4-dihydroxyphenylacetic acid (DOPAC) is catalyzed by monoamine oxidase (MAO) and generates hydrogen peroxide. Hydrogen peroxide, in the presence of ferrous ion, which is relatively abundant in the basal ganglia, may generate hydroxyl free radicals (the Fenton reaction; Figure 22-2; Olanow, 1990). If the protective mechanisms are inadequate because of inherited or acquired deficiency, the oxyradicals could cause degeneration of dopaminergic neurons. This hypothesis has led to several proposals for therapeutic agents to retard neuronal loss in PD. Two candidates, the free radical scavenger tocopherol (vitamin E) and the MAO inhibitor selegiline (discussed under Selegiline, below), have been tested in a large-scale clinical trial, but neither was shown to have a substantial neuroprotective effect (The Parkinson Study Group, 1993).

Figure 22-2. Production of Free Radicals by the Metabolism of Dopamine (DA). DA is converted by monoamine oxidase (MAO) and aldehyde dehydrogenase to 3,4-dihydroxyphenylacetic acid (DOPAC), producing hydrogen peroxide (H2O2). In the presence of ferrous iron, H2O2 undergoes spontaneous conversion, forming a hydroxyl free radical (the Fenton reaction).
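Written out as reaction equations, the chemistry just described is (a schematic summary; full stoichiometric balancing of the MAO step is omitted):

```latex
% Oxidative deamination of dopamine generates hydrogen peroxide:
\mathrm{DA} \;\xrightarrow{\ \text{MAO, aldehyde dehydrogenase}\ }\; \mathrm{DOPAC} + \mathrm{H_2O_2}
% In the presence of ferrous iron, peroxide yields the hydroxyl radical (Fenton reaction):
\mathrm{H_2O_2} + \mathrm{Fe^{2+}} \;\longrightarrow\; \mathrm{Fe^{3+}} + \mathrm{OH^{-}} + {}^{\bullet}\mathrm{OH}
```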

Parkinson's Disease

Clinical Overview

Parkinsonism is a clinical syndrome comprising four cardinal features: bradykinesia (slowness and poverty of movement), muscular rigidity, resting tremor (which usually abates during voluntary movement), and an impairment of postural balance leading to disturbances of gait and falling. The most common cause of parkinsonism is idiopathic PD, first described by James Parkinson in 1817 as paralysis agitans, or the "shaking palsy." The pathological hallmark of PD is a loss of the pigmented, dopaminergic neurons of the substantia nigra pars compacta, with the appearance of intracellular inclusions known as Lewy bodies (Gibb, 1992; Fearnley and Lees, 1994). Progressive loss of dopamine neurons is a feature of normal aging; however, most people do not lose the 70% to 80% of dopaminergic neurons required to cause symptomatic PD. Without treatment, PD progresses over 5 to 10 years to a rigid, akinetic state in which patients are incapable of caring for themselves. Death frequently results from complications of immobility, including aspiration pneumonia or pulmonary embolism. The availability of effective pharmacological treatment has radically altered the prognosis of PD; in most cases, good functional mobility can be maintained for many years, and the life expectancy of adequately treated patients is substantially increased (Diamond et al., 1987). It is important to recognize that several disorders other than PD also may produce parkinsonism, including some relatively rare neurodegenerative disorders, stroke, and intoxication with dopamine receptor-blocking drugs. Drugs in common clinical use that may cause parkinsonism include antipsychotics such as haloperidol and chlorpromazine (see Chapter 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania) and antiemetics such as prochlorperazine and metoclopramide (see Chapter 38: Prokinetic Agents, Antiemetics, and Agents Used in Irritable Bowel Syndrome).
Although a complete discussion of the clinical diagnostic approach to parkinsonism exceeds the scope of this chapter, the distinction between PD and other causes of parkinsonism is important, because parkinsonism arising from other causes usually is refractory to all forms of treatment.

Parkinson's Disease: Pathophysiology

The primary deficit in PD is a loss of the neurons in the substantia nigra pars compacta that provide dopaminergic innervation to the striatum (caudate and putamen). The current understanding of the pathophysiology of PD can be traced to classical neurochemical investigations of the 1950s and 1960s, in which a reduction of more than 80% in the striatal dopamine content was demonstrated. This paralleled the loss of neurons from the substantia nigra, suggesting that replacement of dopamine could restore function (Cotzias et al., 1969; Hornykiewicz, 1973). These fundamental observations led to an extensive investigative effort to understand the metabolism and actions of dopamine and to learn how a deficit in dopamine gives rise to the clinical features of PD. This effort led to a current model of the function of the basal ganglia that admittedly is incomplete but is still useful.

Biosynthesis of Dopamine

Dopamine, a catecholamine, is synthesized in the terminals of dopaminergic neurons from tyrosine, which is transported across the blood-brain barrier by an active process (Figures 22-3 and 22-4). The rate-limiting step in the synthesis of dopamine is the conversion of L-tyrosine to L-dihydroxyphenylalanine (L-DOPA), catalyzed by the enzyme tyrosine hydroxylase, which is present within catecholaminergic neurons. L-DOPA is converted rapidly to dopamine by aromatic L-amino acid decarboxylase. In dopaminergic nerve terminals, dopamine is taken up into vesicles by a transporter protein; this process is blocked by reserpine, which leads to depletion of dopamine. Release of dopamine from nerve terminals occurs through exocytosis of presynaptic vesicles, a process that is triggered by depolarization leading to entry of Ca2+. Once dopamine is in the synaptic cleft, its actions may be terminated by reuptake through a membrane carrier protein, a process antagonized by drugs such as cocaine. Alternatively, dopamine can be degraded by the sequential actions of MAO and catechol-O-methyltransferase (COMT) to yield two metabolic products, 3,4-dihydroxyphenylacetic acid (DOPAC) and 3-methoxy-4-hydroxyphenylacetic acid (homovanillic acid; HVA). In human beings, HVA is the primary product of the metabolism of dopamine (Cooper et al., 1996).

Figure 22-3. Dopaminergic Terminal. Dopamine (DA) is synthesized within neuronal terminals from the precursor tyrosine by the sequential actions of the enzymes tyrosine hydroxylase, producing the intermediary L-dihydroxyphenylalanine (DOPA), and aromatic L-amino acid decarboxylase. In the terminal, dopamine is transported into storage vesicles by a transporter protein (T) associated with the vesicular membrane.
Release, triggered by depolarization and entry of Ca2+, allows dopamine to act on postsynaptic dopamine receptors (DAR); as discussed in the text, several distinct types of dopamine receptors are present in the brain, and the differential actions of dopamine on postsynaptic targets bearing different types of dopamine receptors have important implications for the function of neural circuits. The actions of dopamine are terminated by the sequential actions of the enzymes catechol-O-methyltransferase (COMT) and monoamine oxidase (MAO), or by reuptake of dopamine into the terminal.
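The synthetic and degradative steps described above can be summarized schematically (one common degradative order is shown; the order of MAO and COMT action can vary):

```latex
% Synthesis (tyrosine hydroxylase catalyzes the rate-limiting step):
\text{tyrosine} \;\xrightarrow{\ \text{tyrosine hydroxylase}\ }\; \text{\textsc{l}-DOPA}
  \;\xrightarrow{\ \text{aromatic \textsc{l}-amino acid decarboxylase}\ }\; \text{dopamine}
% Degradation (HVA is the principal end product in human beings):
\text{dopamine} \;\xrightarrow{\ \text{MAO}\ }\; \text{DOPAC} \;\xrightarrow{\ \text{COMT}\ }\; \text{HVA}
```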

Figure 22-4. Metabolism of Levodopa (L-DOPA). AD, aldehyde dehydrogenase; COMT, catechol-O-methyltransferase; DβH, dopamine β-hydroxylase; AAD, aromatic L-amino acid decarboxylase; MAO, monoamine oxidase.

Dopamine Receptors

The actions of dopamine in the brain are mediated by a family of dopamine receptor proteins (Figure 22-5). Two types of dopamine receptors were identified in the mammalian brain using pharmacological techniques: D1 receptors, which stimulate the synthesis of the intracellular second messenger cyclic AMP, and D2 receptors, which inhibit cyclic AMP synthesis as well as suppress Ca2+ currents and activate receptor-operated K+ currents. Application of molecular genetics to the study of dopamine receptors has revealed a more complex receptor situation than originally envisioned. At present, five distinct dopamine receptors are known to exist (see Jarvie and Caron, 1993, and Chapter 12: Neurotransmission and the Central Nervous System). The dopamine receptors share several structural features, including the presence of seven α-helical segments capable of spanning the cell membrane. This structure identifies the dopamine receptors as members of the larger superfamily of seven-transmembrane-region receptor proteins, which includes other important neural receptors such as the β-adrenergic receptors, the olfactory receptors, and the visual pigment rhodopsin. All members of this superfamily act through guanine nucleotide-binding proteins (G proteins; see Chapter 2: Pharmacodynamics: Mechanisms of Drug Action and the Relationship Between Drug Concentration and Effect).

Figure 22-5. Distribution and Characteristics of Dopamine Receptors. SNpc, substantia nigra pars compacta; cAMP, cyclic AMP; V, voltage.

The five dopamine receptors can be divided into two groups on the basis of their pharmacological and structural properties (Figure 22-5). The D1 and D5 proteins have a long intracellular carboxy-terminal tail and are members of the pharmacologically defined D1 class; they stimulate the formation of cyclic AMP and phosphatidylinositol hydrolysis. The D2, D3, and D4 receptors share a large third intracellular loop and are of the D2 class. They decrease cyclic AMP formation and modulate K+ and Ca2+ currents. Each of the five dopamine receptor proteins has a distinct anatomical pattern of expression in the brain. The D1 and D2 proteins are abundant in the striatum and are the most important receptor sites with regard to the causes and treatment of PD. The D4 and D5 proteins are largely extrastriatal, while D3 expression is low in the caudate and putamen but more abundant in the nucleus accumbens and olfactory tubercle.

Neural Mechanism of Parkinsonism

Considerable effort has been devoted to understanding how the loss of dopaminergic input to the neurons of the neostriatum gives rise to the clinical features of PD (for review, see Albin et al., 1989; Mink and Thach, 1993; and Wichmann and DeLong, 1993). The basal ganglia can be viewed as a modulatory side loop that regulates the flow of information from the cerebral cortex to the motor neurons of the spinal cord (Figure 22-6). The neostriatum is the principal input structure of the basal ganglia and receives excitatory glutamatergic input from many areas of the cortex. The majority of neurons within the striatum are projection neurons that innervate other basal ganglia structures. A small but important subgroup of striatal neurons are interneurons that interconnect neurons within the striatum but do not project beyond its borders. Acetylcholine as well as neuropeptides are used as transmitters by the striatal interneurons.
The outflow of the striatum proceeds along two distinct routes, identified as the direct and indirect pathways. The direct pathway is formed by neurons in the striatum that project directly to the output stages of the basal ganglia, the substantia nigra pars reticulata (SNpr) and the medial globus pallidus (MGP); these in turn relay to the ventroanterior and ventrolateral thalamus, which provides excitatory input to the cortex. The neurotransmitter of both links of the direct pathway is gamma-aminobutyric acid (GABA), which is inhibitory, so that the net effect of stimulation of the direct pathway at the level of the striatum is to increase the excitatory outflow from the thalamus to the cortex. The indirect pathway is composed of striatal neurons that project to the lateral globus pallidus (LGP). This structure in turn innervates the subthalamic nucleus (STN), which provides outflow to the SNpr and MGP output stage. As in the direct pathway, the first two links (the projections from the striatum to the LGP and from the LGP to the STN) use the inhibitory transmitter GABA; however, the final link (the projection from the STN to the SNpr and MGP) is an excitatory glutamatergic pathway. Thus, the net effect of stimulating the indirect pathway at the level of the striatum is to reduce the excitatory outflow from the thalamus to the cerebral cortex.

Figure 22-6. Schematic Wiring Diagram of the Basal Ganglia. The neostriatum (STR) is the principal input structure of the basal ganglia and receives excitatory, glutamatergic input from many areas of the cerebral cortex. Outflow from the STR proceeds along two routes. The direct pathway, from the STR to the substantia nigra pars reticulata (SNpr) and medial globus pallidus (MGP), uses the inhibitory transmitter GABA. The indirect pathway, from the STR through the lateral globus pallidus (LGP) and the subthalamic nucleus (STN) to the SNpr and MGP, consists of two inhibitory, GABAergic links and one excitatory, glutamatergic projection. The substantia nigra pars compacta (SNpc) provides dopaminergic innervation to the striatal neurons giving rise to both the direct and indirect pathways and regulates the relative activity of these two paths. The SNpr and MGP are the output structures of the basal ganglia and provide feedback to the cerebral cortex through the ventroanterior and ventrolateral nuclei of the thalamus (VA/VL).
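As a purely illustrative aside (not part of the original text), the sign bookkeeping of the two pathways can be checked with a toy computation: encode each link as +1 (excitatory) or -1 (inhibitory), and the product of the signs along a chain gives the net effect of striatal activity on the final target. The variable names and the +1/-1 encoding are my own shorthand.

```python
# Toy sketch of the direct/indirect pathway logic described above.
# Each link is encoded as +1 (excitatory) or -1 (inhibitory); this encoding
# is illustrative only, not part of the textbook's presentation.

# Direct pathway: STR -> SNpr/MGP (GABA), SNpr/MGP -> thalamus (GABA)
DIRECT = [-1, -1]
# Indirect pathway: STR -> LGP (GABA), LGP -> STN (GABA),
#                   STN -> SNpr/MGP (glutamate), SNpr/MGP -> thalamus (GABA)
INDIRECT = [-1, -1, +1, -1]

def net_effect(links):
    """Product of link signs: net effect of striatal firing on the thalamus."""
    sign = 1
    for s in links:
        sign *= s
    return sign

print(net_effect(DIRECT))    # 1: the direct pathway disinhibits the thalamus
print(net_effect(INDIRECT))  # -1: the indirect pathway suppresses thalamic outflow
```

The differential action of dopamine then follows from the text: D1-mediated excitation boosts the net-positive direct chain, while D2-mediated inhibition damps the net-negative indirect chain.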

The key feature of this model of basal ganglia function, which accounts for the symptoms observed in PD as a result of the loss of dopaminergic neurons, is the differential effect of dopamine on the direct and indirect pathways (Figure 22-7). The dopaminergic neurons of the substantia nigra pars compacta (SNpc) innervate all parts of the striatum; however, the target striatal neurons express distinct types of dopamine receptors. The striatal neurons giving rise to the direct pathway express primarily the excitatory D1 dopamine receptor protein, while the striatal neurons forming the indirect pathway express primarily the inhibitory D2 type. Thus, dopamine released in the striatum tends to increase the activity of the direct pathway and reduce the activity of the indirect pathway, whereas the depletion that occurs in PD has the opposite effect. The net effect of the reduced dopaminergic input in PD is to increase markedly the inhibitory outflow from the SNpr and MGP to the thalamus and to reduce excitation of the motor cortex.

Figure 22-7. The Basal Ganglia in Parkinson's Disease (PD). The primary defect is destruction of the dopaminergic neurons of the SNpc. The striatal neurons that form the direct pathway from the STR to the SNpr and MGP express primarily the excitatory D1 dopamine receptor, while the striatal neurons that project to the LGP and form the indirect pathway express the inhibitory D2 dopamine receptor. Thus, loss of the dopaminergic input to the striatum has a differential effect on the two outflow pathways; the direct pathway to the SNpr and MGP is less active, while the activity in the indirect pathway is increased. The net effect is that neurons in the SNpr and MGP become more active. This leads to increased inhibition of the VA/VL thalamus and reduced excitatory input to the cortex. Thin line, normal pathway activity; thick line, increased pathway activity in PD; dashed line, reduced pathway activity in PD. (See the legend to Figure 22-6 for definitions of anatomical abbreviations.)

This model of basal ganglia function has important implications for the rational design and use of pharmacological agents in PD. First, it suggests that, to restore the balance of the system through stimulation of dopamine receptors, the complementary effects of actions at both D1 and D2 receptors, as well as the possibility of adverse effects that may be mediated by D3, D4, or D5 receptors, must be considered. Second, it explains why replacement of dopamine is not the only approach to the treatment of PD. Drugs that inhibit cholinergic receptors have long been used for the treatment of parkinsonism. Although their mechanisms of action are not completely understood, it seems likely that their effect is mediated at the level of the striatal projection neurons, which normally receive cholinergic input from striatal cholinergic interneurons. No clinically useful antiparkinsonian drugs acting through GABA or glutamate receptors are presently available, even though both transmitters have crucial roles in the circuitry of the basal ganglia. However, they represent a promising avenue for drug development (Greenamyre and O'Brien, 1991).

Treatment of Parkinson's Disease

Commonly used medications for the treatment of PD are summarized in Table 22-1.

Levodopa

Levodopa (L-DOPA, LARODOPA, DOPAR, L-3,4-dihydroxyphenylalanine), the metabolic precursor of dopamine, is the single most effective agent in the treatment of PD. Levodopa is itself largely inert; its therapeutic as well as adverse effects result from the decarboxylation of levodopa to dopamine. When administered orally, levodopa is rapidly absorbed from the small bowel by an active transport system for aromatic amino acids. Concentrations of the drug in plasma usually peak between 0.5 and 2 hours after an oral dose, and the half-life in plasma is short (1 to 3 hours). The rate and extent of absorption of levodopa depend upon the rate of gastric emptying, the pH of gastric juice, and the length of time the drug is exposed to the degradative enzymes of the gastric and intestinal mucosa. Competition for absorption sites in the small bowel from dietary amino acids also may have a marked effect on the absorption of levodopa; administration of levodopa with meals delays absorption and reduces peak plasma concentrations.
Entry of the drug into the central nervous system (CNS) across the blood-brain barrier also is an active process mediated by a carrier of aromatic amino acids, and competition between dietary protein and levodopa may occur at this level. In the brain, levodopa is converted to dopamine by decarboxylation, primarily within the presynaptic terminals of dopaminergic neurons in the striatum. The dopamine produced is responsible for the therapeutic effectiveness of the drug in PD; after release, it is either transported back into dopaminergic terminals by the presynaptic uptake mechanism or metabolized by the actions of MAO and COMT (Mouradian and Chase, 1994). In modern practice, levodopa is almost always administered in combination with a peripherally acting inhibitor of aromatic L-amino acid decarboxylase, such as carbidopa or benserazide. If levodopa is administered alone, the drug is largely decarboxylated by enzymes in the intestinal mucosa and other peripheral sites, so that relatively little unchanged drug reaches the cerebral circulation and probably less than 1% penetrates the CNS. In addition, dopamine released into the circulation by peripheral conversion of levodopa produces undesirable effects, particularly nausea. Inhibition of peripheral decarboxylase markedly increases the fraction of administered levodopa that remains unmetabolized and available to cross the blood-brain barrier and reduces the incidence of gastrointestinal side effects. In most individuals, a daily dose of 75 mg of carbidopa is sufficient to prevent the development of nausea. For this reason, the most commonly prescribed form of carbidopa/levodopa (SINEMET, ATAMET) is the 25/100 form, containing 25 mg of carbidopa and 100 mg of levodopa. With this formulation, dosage schedules of three or more tablets daily provide acceptable inhibition of decarboxylase in most individuals.
Occasionally, individuals will require larger doses of carbidopa to minimize gastrointestinal side effects, and administration of supplemental carbidopa (LODOSYN) alone may be beneficial.
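The clinical importance of levodopa's short plasma half-life can be illustrated with a minimal first-order elimination sketch. This is a hypothetical one-compartment model, not from the text: the 1.5-hour half-life is an assumed value within the cited 1- to 3-hour range, and absorption is ignored.

```python
import math

# Assumed half-life of 1.5 h (the text cites a range of 1 to 3 hours).
HALF_LIFE_H = 1.5
k = math.log(2) / HALF_LIFE_H  # first-order elimination rate constant (per hour)

def fraction_remaining(hours):
    """Fraction of the peak plasma concentration remaining after `hours`."""
    return math.exp(-k * hours)

# Little drug remains by the end of a typical dosing interval, which is
# consistent with the "wearing off" of clinical benefit described in the text.
print(f"at 2 h: {fraction_remaining(2):.2f}")  # at 2 h: 0.40
print(f"at 6 h: {fraction_remaining(6):.2f}")  # at 6 h: 0.06
```

On this simplified view, shortening the dosing interval raises the trough concentration, at the cost of higher peaks if the per-dose amount is not reduced.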

Levodopa therapy can have a dramatic effect on all the signs and symptoms of PD. Early in the course of the disease, the degree of improvement in tremor, rigidity, and bradykinesia may be nearly complete. In early PD, the duration of the beneficial effects of levodopa may exceed the plasma lifetime of the drug, suggesting that the nigrostriatal dopamine system retains some capacity to store and release dopamine. A principal limitation of the long-term use of levodopa therapy is that, with time, this apparent "buffering" capacity is lost, and the patient's motor state may fluctuate dramatically with each dose of levodopa. A common problem is the development of the "wearing off" phenomenon: each dose of levodopa effectively improves mobility for a period of time, perhaps 1 to 2 hours, but rigidity and akinesia return rapidly at the end of the dosing interval. Increasing the dose and frequency of administration can improve this situation, but this often is limited by the development of dyskinesias (excessive and abnormal involuntary movements). Dyskinesias are observed most often when the plasma levodopa concentration is high, although in some individuals dyskinesias or dystonia may be triggered when the level is rising or falling. These movements can be as uncomfortable and disabling as the rigidity and akinesia of PD. In the later stages of PD, patients may fluctuate rapidly between being "off," having no beneficial effects from their medications, and being "on" but with disabling dyskinesias, a situation called the "on/off phenomenon." Recent evidence has indicated that the induction of on/off phenomena and dyskinesias may be the result of an active process of adaptation to variations in brain and plasma levodopa levels.
This process of adaptation is apparently complex, involving not only alterations in the expression of dopamine receptor proteins but also downstream changes in the postsynaptic striatal neurons, including modification of NMDA glutamate receptors (Mouradian and Chase, 1994; Chase, 1998). When levodopa levels are maintained at a constant level by intravenous infusion, dyskinesias and fluctuations are greatly reduced, and the clinical improvement is maintained for up to several days after returning to oral levodopa dosing (Mouradian et al., 1990; Chase et al., 1994). A sustained-release formulation consisting of carbidopa/levodopa in an erodible wax matrix (SINEMET CR) has been marketed in an attempt to produce more stable plasma levodopa levels than can be obtained with oral administration of standard carbidopa/levodopa formulations. This formulation is helpful in some cases, but the absorption of the sustained-release formulation is not entirely predictable. Another technique used to overcome the on/off phenomenon is to sum the total daily dose of carbidopa/levodopa and give equal amounts every 2 hours rather than every 4 or 6 hours. An important unanswered question regarding the use of levodopa in PD is whether this medication alters the course of the underlying disease or merely modifies the symptoms (Agid et al., 1998). Two aspects of levodopa treatment and the outcome of PD are of concern. First, it has been suggested that, if the production of free radicals as a result of dopamine metabolism contributes to the death of nigrostriatal neurons, the addition of levodopa might actually accelerate the process (Olanow, 1990), although no convincing evidence for such an effect has yet been obtained.
Second, it is well established that the undesirable on/off fluctuations and "wearing off" phenomena are observed almost exclusively in patients treated with levodopa, but it is not known whether delaying treatment with levodopa will delay the appearance of these effects (Fahn, 1999). In view of these uncertainties, most practitioners have adopted a pragmatic approach, using levodopa only when the symptoms of PD cause functional impairment. In addition to motor fluctuations and nausea, several other adverse effects may be observed with levodopa treatment. A common and troubling adverse effect is the induction of hallucinations and confusion; these effects are particularly common in the elderly and in those with preexisting cognitive dysfunction and often limit the ability to treat parkinsonian symptoms adequately. Conventional antipsychotic agents, such as the phenothiazines, are effective against levodopa-induced psychosis but may cause marked worsening of parkinsonism, probably through actions at
the D2 dopamine receptor. A recent approach has been to use the "atypical" antipsychotic agents, which are effective in the treatment of psychosis but do not cause or worsen parkinsonism (see Chapter 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania). The most effective of these are clozapine and quetiapine (Friedman and Factor, 2000). Peripheral decarboxylation of levodopa and release of dopamine into the circulation may activate vascular dopamine receptors and produce orthostatic hypotension. The actions of dopamine at α- and β-adrenergic receptors may induce cardiac arrhythmias, especially in patients with preexisting conduction disturbances. Administration of levodopa with nonspecific inhibitors of MAO, such as phenelzine and tranylcypromine, markedly accentuates the actions of levodopa and may precipitate life-threatening hypertensive crisis and hyperpyrexia; nonspecific MAO inhibitors always should be discontinued at least 14 days before levodopa is administered (note that this prohibition does not include the MAO-B subtype-specific inhibitor selegiline, which, as discussed below, often is administered safely in combination with levodopa). Abrupt withdrawal of levodopa or other dopaminergic medications may precipitate the neuroleptic malignant syndrome more commonly observed after treatment with dopamine antagonists (Keyser and Rodnitzky, 1991).

Dopamine-Receptor Agonists

An alternative to levodopa is the use of drugs that are direct agonists of striatal dopamine receptors, an approach that offers several potential advantages. Since enzymatic conversion of these drugs is not required for activity, they do not depend on the functional capacities of the nigrostriatal neurons and thus might be more effective than levodopa in late PD.
In addition, dopamine-receptor agonists potentially are more selective in their actions; unlike levodopa, which leads to activation of all dopamine receptor types throughout the brain, agonists may exhibit relative selectivity for different subtypes of dopamine receptors. Most of the dopamine-receptor agonists in current clinical use have durations of action substantially longer than that of levodopa and often are useful in the management of dose-related fluctuations in motor state. Finally, if the hypothesis that free radical formation as a result of dopamine metabolism contributes to neuronal death is correct, then dopamine-receptor agonists have the potential to modify the course of the disease by reducing endogenous release of dopamine as well as the need for exogenous levodopa (Goetz, 1990). Four dopamine-receptor agonists are available for treatment of PD: two older agents, bromocriptine (PARLODEL) and pergolide (PERMAX), and two newer, more selective compounds, ropinirole (REQUIP) and pramipexole (MIRAPEX). Bromocriptine and pergolide are both ergot derivatives and share a similar spectrum of therapeutic actions and adverse effects. Bromocriptine is a strong agonist of the D2 class of dopamine receptors and a partial antagonist of the D1 receptors, while pergolide is an agonist of both classes. Ropinirole and pramipexole (Figure 22-8) have selective activity at D2 class sites (specifically, at the D2 and D3 receptor proteins) and little or no activity at D1 class sites. All four of the drugs are well absorbed orally and have similar therapeutic actions. Like levodopa, they can relieve the clinical symptoms of PD. The duration of action of the dopamine agonists often is longer than that of levodopa, and they are particularly effective in the treatment of patients who have developed on/off phenomena. All four also may produce hallucinosis or confusion, similar to that observed with levodopa, and may worsen orthostatic hypotension.

Figure 22-8. Structures of Selective Dopamine D2-Receptor Agonists.

The principal distinction between the newer, more selective agents and the older ergot derivatives is in their tolerability and speed of titration. Initial treatment with bromocriptine or pergolide may cause profound hypotension, so these drugs should be initiated at low dosage. The ergot derivatives also often induce nausea and fatigue with initial treatment. These symptoms usually are transient, but they require slow upward adjustment of the dose over a period of weeks to months. Ropinirole and pramipexole can be initiated more quickly, achieving therapeutically useful doses in a week or less. They generally cause less gastrointestinal disturbance than do the ergot derivatives, but they can produce nausea and fatigue. Although these properties already have led to widespread adoption of the newer drugs in the United States, there are as yet few data on the effects of long-term treatment. One curious adverse effect of pramipexole and ropinirole reported to date is that some individuals treated with these drugs develop a troubling sleep disorder, with sudden attacks of sleep during ordinary daytime activities (Frucht et al., 1999). This effect seems to be uncommon, but it is prudent to advise patients of this possibility and to switch to another treatment if these symptoms occur. The introduction of pramipexole and ropinirole has led to a substantial change in the clinical use of dopamine agonists in PD. Because these selective agonists are well tolerated, they are increasingly used as initial treatment for PD rather than as adjuncts to levodopa. This change has been driven by two factors: (1) the belief that, because of their longer duration of action, dopamine agonists may be less likely than levodopa to induce on/off effects and dyskinesias, and (2) the concern that levodopa may contribute to oxidative stress, thereby accelerating loss of dopaminergic neurons.
It is important to recognize that, while these concerns are well founded in laboratory experiments, there is at present only limited direct evidence for either of these effects in patients. Two large, controlled clinical trials comparing levodopa to pramipexole or ropinirole as initial treatment of PD recently have revealed a reduced rate of motor fluctuation in patients treated with these agonists; if confirmed by additional studies, these findings are likely to greatly influence the clinical use of these drugs (Parkinson Study Group, 2000; Rascol et al., 2000).

COMT Inhibitors

A recently developed class of drugs for the treatment of PD comprises inhibitors of the enzyme catechol-O-methyltransferase (COMT). COMT and MAO are responsible for the catabolism of levodopa as well as of dopamine. COMT transfers a methyl group from the donor S-adenosyl-L-methionine, producing the pharmacologically inactive compounds 3-O-methyl DOPA (from levodopa) and 3-methoxytyramine (from dopamine) (Figure 22-9). When levodopa is administered orally, nearly 99% of the drug is catabolized and does not reach the brain. The majority is converted by aromatic
L-amino acid decarboxylase (AAD) to dopamine, which causes nausea and hypotension. Addition of an AAD inhibitor, such as carbidopa, reduces the formation of dopamine but increases the fraction of levodopa that is methylated by COMT. The principal therapeutic action of the COMT inhibitors is to block this peripheral conversion of levodopa to 3-O-methyl DOPA, increasing both the plasma half-life of levodopa and the fraction of each dose that reaches the central nervous system (Goetz, 1998).

Figure 22-9. Catechol-O-Methyltransferase Inhibition. The principal site of action of inhibitors of catechol-O-methyltransferase (COMT) (such as tolcapone and entacapone) is in the peripheral circulation. They block the methylation of levodopa (L-DOPA) and increase the fraction of the drug available for delivery to the brain. AAD, aromatic L-amino acid decarboxylase; DA, dopamine; DOPAC, 3,4-dihydroxyphenylacetic acid; MAO, monoamine oxidase; 3MT, 3-methoxytyramine; 3-O-MD, 3-O-methylDOPA.

Two COMT inhibitors presently are available for use in the United States, tolcapone (TASMAR) and entacapone (COMTAN). Both of these agents have been shown in double-blind trials to reduce the clinical symptoms of "wearing off" in patients treated with levodopa/carbidopa (Parkinson Study Group, 1997; Kurth et al., 1997). Although the magnitude of their clinical effects and their mechanisms of action are similar, they differ with respect to pharmacokinetic properties and adverse effects. Tolcapone has a relatively long duration of action, allowing for administration two to three times a day, and appears to act by both central and peripheral inhibition of COMT. The duration of action of entacapone is short, around 2 hours, so it usually is administered simultaneously with each dose of levodopa/carbidopa. The action of entacapone is attributable principally to peripheral inhibition of COMT. The common adverse effects of these agents are similar to those observed in patients treated with levodopa/carbidopa alone and include nausea, orthostatic hypotension, vivid dreams, confusion, and hallucinations. An important adverse effect associated with tolcapone is hepatotoxicity. In clinical trials, up to 2% of the patients treated were noted to have an increase in serum alanine aminotransferase and aspartate aminotransferase; after marketing, three fatal cases of fulminant hepatic failure in patients taking tolcapone were observed. At present, tolcapone should be used only in patients who have not responded to other therapies, and then only with appropriate monitoring for hepatic injury. Entacapone has not been associated with hepatotoxicity and requires no special monitoring.

Selegiline

Two isoenzymes of MAO oxidize monoamines. While both isoenzymes (MAO-A and MAO-B) are present in the periphery and inactivate monoamines of intestinal origin, the isoenzyme MAO-B is
the predominant form in the striatum and is responsible for the majority of the oxidative metabolism of dopamine in the striatum. At low-to-moderate doses (10 mg/day or less), selegiline (ELDEPRYL) is a selective inhibitor of MAO-B, leading to irreversible inhibition of the enzyme (Olanow, 1993). Unlike nonspecific inhibitors of MAO (such as phenelzine, tranylcypromine, and isocarboxazid), selegiline does not inhibit the peripheral metabolism of catecholamines; thus, it can be taken safely with levodopa. Selegiline also does not cause the lethal potentiation of catecholamine action observed when patients taking nonspecific MAO inhibitors ingest indirectly acting sympathomimetic amines such as the tyramine found in certain cheeses and wine. Doses of selegiline higher than 10 mg daily can produce inhibition of MAO-A and should be avoided. Selegiline has been used for several years as a symptomatic treatment for PD, although its benefit is fairly modest. The basis of the efficacy of selegiline is presumed to be its ability to retard the breakdown of dopamine in the striatum. With the recent emergence of interest in the potential role of free radicals and oxidative stress in the pathogenesis of PD, it has been proposed that the ability of selegiline to retard the metabolism of dopamine might confer neuroprotective properties. In support of this idea, it was observed that selegiline could protect animals from MPTP-induced parkinsonism by blocking the conversion of MPTP to its toxic metabolite (the 1-methyl-4-phenylpyridinium ion), a transformation mediated by MAO-B. The potential protective role of selegiline in idiopathic PD was evaluated in multicenter randomized trials; these studies showed a symptomatic effect of selegiline in PD, but longer follow-up failed to provide any definite evidence of an ability to retard the loss of dopaminergic neurons (Parkinson Study Group, 1993). Selegiline is generally well tolerated in patients with early or mild PD.
In patients with more advanced PD or underlying cognitive impairment, selegiline may accentuate the adverse motor and cognitive effects of levodopa therapy. Metabolites of selegiline include amphetamine and methamphetamine, which may cause anxiety, insomnia, and other adverse symptoms. Interestingly, it has been observed that selegiline, like the nonspecific MAO inhibitors, can lead to the development of stupor, rigidity, agitation, and hyperthermia after administration of the analgesic meperidine; the basis of this interaction is uncertain. There also have been reports of adverse effects resulting from interactions between selegiline and tricyclic antidepressants and between selegiline and serotonin-reuptake inhibitors.

Muscarinic Receptor Antagonists

Antagonists of muscarinic acetylcholine receptors were widely used for the treatment of PD before the discovery of levodopa. The biological basis for the therapeutic actions of anticholinergics is not completely understood. It seems likely that they act within the neostriatum, through the receptors that normally mediate the response to the intrinsic cholinergic innervation of this structure, which arises primarily from cholinergic striatal interneurons. Several muscarinic cholinergic receptors have been cloned (see Chapters 7: Muscarinic Receptor Agonists and Antagonists and 12: Neurotransmission and the Central Nervous System); like the dopamine receptors, these are proteins with seven transmembrane domains that are linked to second-messenger systems by G proteins. Five subtypes of muscarinic receptors have been identified; at least four, and probably all five, subtypes are present in the striatum, although each has a distinct distribution (Hersch et al., 1994).
Several drugs with anticholinergic properties are currently used in the treatment of PD, including trihexyphenidyl (ARTANE, 2 to 4 mg three times per day), benztropine mesylate (COGENTIN, 1 to 4 mg two times per day), and diphenhydramine hydrochloride (BENADRYL, 25 to 50 mg three to four times per day). All have a modest antiparkinsonian action, which is useful in the treatment of early PD or as an adjunct to dopamimetic therapy. The adverse effects of these drugs are a result of their anticholinergic properties. Most troublesome are sedation and mental confusion, frequently seen in the elderly. They also may produce constipation, urinary retention, and blurred
vision through cycloplegia; they must be used with caution in patients with narrow-angle glaucoma.

Amantadine

Amantadine (SYMMETREL), an antiviral agent used for the prophylaxis and treatment of influenza A (see Chapter 50: Antimicrobial Agents: Antiviral Agents (Nonretroviral)), has antiparkinsonian actions. The mechanism of action of amantadine is not clear. It has been suggested that it might alter dopamine release or reuptake; anticholinergic properties also may contribute to its therapeutic actions. Amantadine and the closely related compound memantine have activity at NMDA glutamate receptors, which may contribute to their antiparkinsonian actions (Stoof et al., 1992). In any case, the effects of amantadine in PD are modest. It is used as initial therapy of mild PD. It also may be helpful as an adjunct in patients on levodopa with dose-related fluctuations. Amantadine usually is administered in a dose of 100 mg twice a day and is well tolerated. Dizziness, lethargy, anticholinergic effects, and sleep disturbance, as well as nausea and vomiting, have been observed occasionally, but even when present these effects are mild and reversible.

Alzheimer's Disease

Clinical Overview

AD produces an impairment of cognitive abilities that is gradual in onset but relentless in progression. Impairment of short-term memory usually is the first clinical feature, while retrieval of distant memories is preserved relatively well into the course of the disease. As the condition progresses, additional cognitive abilities are impaired, among them the ability to calculate, exercise visuospatial skills, and use common objects and tools (ideomotor apraxia). The level of arousal or alertness of the patient is not affected until the condition is very advanced, nor is there motor weakness, although muscular contractures are an almost universal feature of advanced stages of the disease.
Death, most often from a complication of immobility such as pneumonia or pulmonary embolism, usually ensues within 6 to 12 years after onset. The diagnosis of AD is based on careful clinical assessment of the patient and appropriate laboratory tests to exclude other disorders that may mimic AD; at present, no direct antemortem confirmatory test exists.

Pathophysiology

AD is characterized by marked atrophy of the cerebral cortex and loss of cortical and subcortical neurons. The pathological hallmarks of AD are senile plaques, which are spherical accumulations of the protein β-amyloid accompanied by degenerating neuronal processes, and neurofibrillary tangles, composed of paired helical filaments and other proteins (Arnold et al., 1991; Arriagada et al., 1992; Braak and Braak, 1994). Although small numbers of senile plaques and neurofibrillary tangles can be observed in intellectually normal individuals, they are far more abundant in AD, and the abundance of tangles is roughly proportional to the severity of cognitive impairment. In advanced AD, senile plaques and neurofibrillary tangles are numerous. They are most abundant in the hippocampus and associative regions of the cortex, whereas areas such as the visual and motor cortices are relatively spared. This corresponds to the clinical features of marked impairment of memory and abstract reasoning, with preservation of vision and movement. The factors underlying the selective vulnerability of particular cortical neurons to the pathological effects of AD are not known.

Neurochemistry

The neurochemical disturbances that arise in AD have been studied intensively (Johnston, 1992).

Direct analysis of neurotransmitter content in the cerebral cortex shows a reduction of many transmitter substances that parallels neuronal loss; there is a striking and disproportionate deficiency of acetylcholine. The anatomical basis of the cholinergic deficit is the atrophy and degeneration of subcortical cholinergic neurons, particularly those in the basal forebrain (nucleus basalis of Meynert), that provide cholinergic innervation to the whole cerebral cortex. The selective deficiency of acetylcholine in AD, as well as the observation that central cholinergic antagonists such as atropine can induce a confusional state that bears some resemblance to the dementia of AD, has given rise to the "cholinergic hypothesis," which proposes that a deficiency of acetylcholine is critical in the genesis of the symptoms of AD (Perry, 1986). Although the conceptualization of AD as a "cholinergic deficiency syndrome," in parallel with the "dopaminergic deficiency syndrome" of PD, provides a useful framework, it is important to note that the deficit in AD is far more complex, involving multiple neurotransmitter systems, including serotonin, glutamate, and neuropeptides, and that in AD there is destruction not only of cholinergic neurons but also of the cortical and hippocampal targets that receive cholinergic input.

Role of β-Amyloid

The presence of aggregates of β-amyloid is a constant feature of AD. Until recently, it was not clear whether the amyloid protein was causally linked to the disease process or merely a by-product of neuronal death. The application of molecular genetics has shed considerable light on this question. β-Amyloid was isolated from affected brains and found to be a short polypeptide of 42 to 43 amino acids. This information led to cloning of the amyloid precursor protein (APP), a much larger protein of more than 700 amino acids, which is widely expressed by neurons throughout the brain in normal individuals as well as in those with AD.
The function of APP is unknown, although the structural features of the protein suggest that it may serve as a cell surface receptor for an as-yet-unidentified ligand. The production of β-amyloid from APP appears to result from abnormal proteolytic cleavage of APP by the recently isolated enzyme BACE (β-site APP-cleaving enzyme). This may be an important target of future therapies (Vassar et al., 1999). Analysis of APP gene structure in pedigrees exhibiting autosomal dominant inheritance of AD has shown that, in some families, mutations of the β-amyloid-forming region of APP are present, while in others, mutations of proteins involved in the processing of APP have been implicated (Selkoe, 1998). These results demonstrate that it is possible for abnormalities in APP or its processing to cause AD. The vast majority of cases of AD, however, are not familial, and structural abnormality of APP or related proteins has not been observed consistently in these sporadic cases of AD. As noted above, common alleles of the apo E protein have been found to influence the probability of developing AD. This suggests that modifying the metabolism of APP might alter the course of AD in both familial and sporadic cases (Whyte et al., 1994), but no clinically practical strategies have been developed yet. Treatment of Alzheimer's Disease A major approach to the treatment of AD has involved attempts to augment the cholinergic function of the brain (Johnston, 1992). An early approach was the use of precursors of acetylcholine synthesis, such as choline chloride and phosphatidyl choline (lecithin). Although these supplements generally are well tolerated, randomized trials have failed to demonstrate any clinically significant efficacy.
Direct intracerebroventricular injection of cholinergic agonists such as bethanechol appears to have some beneficial effects, although this requires surgical implantation of a reservoir connecting to the subarachnoid space and is too cumbersome and intrusive for practical use. A somewhat more successful strategy has been the use of inhibitors of acetylcholinesterase (AChE), the catabolic enzyme for acetylcholine (see Chapter 8: Anticholinesterase Agents). Physostigmine, a

rapidly acting, reversible AChE inhibitor, produces improved responses in animal models of learning, and some studies have demonstrated mild transitory improvement in memory following physostigmine treatment in patients with AD. The use of physostigmine has been limited because of its short half-life and tendency to produce symptoms of systemic cholinergic excess at therapeutic doses. Four inhibitors of AChE currently are approved by the United States Food and Drug Administration for treatment of Alzheimer's disease: tacrine (1,2,3,4-tetrahydro-9-aminoacridine; COGNEX), donepezil (ARICEPT) (Mayeux and Sano, 1999), rivastigmine (EXELON), and galantamine (REMINYL). Tacrine is a potent, centrally acting inhibitor of AChE (Freeman and Dawson, 1991). Studies of oral tacrine in combination with lecithin have confirmed that there is indeed an effect of tacrine on some measures of memory performance, but the magnitude of improvement observed with the combination of lecithin and tacrine is modest at best (Chatellier and Lacomblez, 1990). The side effects of tacrine often are significant and dose-limiting; abdominal cramping, anorexia, nausea, vomiting, and diarrhea are observed in up to one-third of patients receiving therapeutic doses, and elevations of serum transaminases are observed in up to 50% of those treated. Because of the significant side-effect profile, tacrine is not widely used in clinical practice. Donepezil is a selective inhibitor of AChE in the CNS with little effect on AChE in peripheral tissues. It produces modest improvements in cognitive scores in Alzheimer's disease patients (Rogers and Friedhoff, 1988) and has a long half-life (see Appendix II), allowing once-daily dosing. Rivastigmine and galantamine are dosed twice daily and produce a similar degree of cognitive improvement.
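The pharmacokinetic contrast drawn above, between physostigmine's short half-life and donepezil's long one, can be made concrete with a simple first-order elimination calculation. The following sketch is illustrative only; the half-life values used (roughly 70 hours for donepezil and about 1 hour for physostigmine) are assumptions chosen for the example and are not stated in this text.

```python
def fraction_remaining(t_half_hours: float, interval_hours: float) -> float:
    """Fraction of a dose still present after one dosing interval,
    assuming simple first-order (exponential) elimination."""
    return 0.5 ** (interval_hours / t_half_hours)

# Assumed half-lives for illustration: donepezil ~70 h, physostigmine ~1 h.
donepezil = fraction_remaining(70.0, 24.0)       # once-daily interval
physostigmine = fraction_remaining(1.0, 24.0)    # same interval

# A long half-life leaves most of the drug on board at 24 hours,
# which is why once-daily dosing is feasible for donepezil.
print(round(donepezil, 2))     # -> 0.79
print(f"{physostigmine:.2e}")  # -> 5.96e-08
```

The qualitative point, that a dosing interval should be judged against the drug's half-life, holds regardless of the exact values assumed.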
Adverse effects associated with donepezil, rivastigmine, and galantamine are similar in character but generally less frequent and less severe than those observed with tacrine; they include nausea, diarrhea, vomiting, and insomnia. Donepezil, rivastigmine, and galantamine are not associated with the hepatotoxicity that limits the use of tacrine. Drugs currently under development for treatment of Alzheimer's disease include additional anticholinesterase agents as well as agents representing other pharmacological approaches. Memantine, an NMDA-receptor antagonist, has shown promise in clinical trials at slowing the progression of AD in patients with moderately severe disease. Antioxidants, antiinflammatory agents, and estrogens have been studied, but none of these has established efficacy. The identification of APP and the enzymes involved in the processing of this protein has opened the door to the development of antiaggregants, a β-amyloid vaccine, and modifiers of APP processing, which may represent the next generation of Alzheimer's therapy. Huntington's Disease Clinical Features HD is a dominantly inherited disorder characterized by the gradual onset of motor incoordination and cognitive decline in midlife. Symptoms develop insidiously, either as a movement disorder manifest by brief jerk-like movements of the extremities, trunk, face, and neck (chorea) or by personality changes, or both. Fine motor incoordination and impairment of rapid eye movements are early features. Occasionally, especially when the onset of symptoms occurs before the age of 20, choreic movements are less prominent; instead, bradykinesia and dystonia predominate. As the disorder progresses, the involuntary movements become more severe, dysarthria and dysphagia develop, and balance is impaired. The cognitive disorder manifests itself first by slowness of mental processing and difficulty in organizing complex tasks.
Memory is affected, but affected persons rarely lose their memory of family, friends, and the immediate situation. Such persons often become irritable, anxious, and depressed. Less frequently, paranoia and delusional states are manifest. The outcome of HD is invariably fatal; over a course of 15 to 30 years, the affected person becomes

totally disabled and unable to communicate, requiring full-time care; death ensues from the complications of immobility (Hayden, 1981; Harper, 1991, 1992). Pathology and Pathophysiology HD is characterized by prominent neuronal loss in the caudate/putamen of the brain (Vonsattel et al., 1985). Atrophy of these structures proceeds in an orderly fashion, first affecting the tail of the caudate nucleus and then proceeding anteriorly, from medial-dorsal to lateral-ventral. Other areas of the brain also are affected, although much less severely; morphometric analyses indicate that there are fewer neurons in cerebral cortex, hypothalamus, and thalamus. Even within the striatum, the neuronal degeneration of HD is selective. Interneurons and afferent terminals are largely spared, while the striatal projection neurons (the medium spiny neurons) are severely affected. This leads to large decreases in striatal GABA concentrations, whereas somatostatin and dopamine concentrations are relatively preserved (Ferrante et al., 1987; Reiner et al., 1988). Selective vulnerability also appears to underlie the most conspicuous clinical feature of HD, the development of chorea. In most adult-onset cases, the medium spiny neurons that project to the LGP (the indirect pathway) appear to be affected earlier than those projecting to the MGP and SNpr (the direct pathway; Albin et al., 1990, 1992). The disproportionate impairment of the indirect pathway increases excitatory drive to the neocortex, producing involuntary choreiform movements (Figure 22-10). In some individuals, rigidity rather than chorea is the predominant clinical feature; this is especially common in juvenile-onset cases. In these cases the striatal neurons giving rise to both the direct and indirect pathways are impaired to a comparable degree. Figure 22-10. The Basal Ganglia in Huntington's Disease (HD). HD is characterized by loss of neurons from the STR.
The neurons that project to the LGP and form the indirect pathway are affected earlier in the course of the disease than those that project to the MGP. This leads to a loss of inhibition of the LGP. The increased activity in this structure in turn inhibits the STN, SNpr, and MGP, resulting in a loss of inhibition to the VA/VL thalamus and increased thalamocortical excitatory drive. Thin line, normal pathway activity; thick line, increased pathway activity in HD; dashed line, reduced pathway activity in HD. (See legend to Figure 22-6 for definitions of anatomical abbreviations.)

Genetics HD is an autosomal dominant disorder with nearly complete penetrance. The average age of onset is between 35 and 45 years, but the range varies from as early as age two to as late as the mid-eighties. Although the disease is inherited equally from mother and father, more than 80% of those developing symptoms before the age of 20 inherit the defect from the father. This is an example of anticipation, or the tendency for the age of onset of a disease to decline with each succeeding generation, which also is observed in other neurodegenerative diseases with similar genetic mechanisms. Known homozygotes for HD show clinical characteristics identical to the typical HD heterozygote, indicating that the unaffected chromosome does not attenuate the disease symptomatology. Until the discovery of the genetic defect responsible for HD, de novo mutations causing HD were thought to be unusual; but it is clear now that the disease can arise from unaffected parents, especially when one carries an "intermediate allele," as described below. The discovery of the genetic mutation responsible for Huntington's disease was the product of an arduous, ten-year, multi-investigator, collaborative effort. In 1993 a region near the telomere of chromosome 4 was found to contain a polymorphic (CAG)n trinucleotide repeat that was significantly expanded in all individuals with HD (Huntington's Disease Collaborative Research Group, 1993). The expansion of this trinucleotide repeat is the genetic alteration responsible for HD. The range of CAG repeat length in normal individuals is between 9 and 34 triplets, with a median repeat length on normal chromosomes of 19. The repeat length in HD varies from 40 to over 100. Repeat lengths of 35 to 39 represent intermediate alleles; some of these individuals develop HD late in life, while others are not affected. Repeat length is correlated inversely with age of onset. 
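The repeat-length ranges described above lend themselves to a simple classification rule. The function below is a hypothetical sketch that encodes exactly the thresholds stated in the text (9 to 34 normal, 35 to 39 intermediate, 40 and above HD); it is an illustration of the ranges, not a clinical diagnostic tool.

```python
def classify_cag_repeat(n_repeats: int) -> str:
    """Classify a huntingtin CAG repeat count per the ranges in the text:
    <= 34 normal; 35-39 intermediate (variable penetrance); >= 40 HD."""
    if n_repeats <= 34:
        return "normal"
    if n_repeats <= 39:
        return "intermediate"
    return "HD"

print(classify_cag_repeat(19))  # median normal repeat length -> "normal"
print(classify_cag_repeat(37))  # -> "intermediate"
print(classify_cag_repeat(55))  # -> "HD"
```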
The younger the age of onset, the higher the probability of a large repeat number. This correlation is most powerful in individuals with onset before the age of 30; with onset above the age of 30, the correlation is weaker. Thus, repeat length cannot serve as an adequate predictor of age of

onset in most individuals. Subsequent work has shown that several other neurodegenerative diseases also arise through expansion of a CAG repeat, including hereditary spinocerebellar ataxias and Kennedy's disease, a rare inherited disorder of motor neurons (Paulson and Fischbeck, 1996). Selective Vulnerability The mechanism by which the expanded trinucleotide repeat leads to the clinical and pathological features of HD is unknown. The HD mutation lies within a gene designated IT15. The IT15 gene itself is very large (10 kilobases) and encodes a protein of approximately 348,000 daltons or 3144 amino acids. The trinucleotide repeat, which encodes the amino acid glutamine, occurs at the 5'-end of IT15 and is followed directly by a second, shorter repeat of (CCG)n, which encodes the amino acid proline. The protein, named huntingtin, does not resemble any other known protein, and the normal function of the protein has not been identified. Mice with a genetic "knockout" of huntingtin die early in embryonic life, so it must have an essential cellular function. It is thought that the mutation results in a gain of function; that is, the mutant protein acquires a new function or property not found in the normal protein. The HD gene is expressed widely throughout the body. High levels of expression are present in brain, pancreas, intestine, muscle, liver, adrenals, and testes. In brain, expression of IT15 does not appear to be correlated with neuron vulnerability; although the striatum is most severely affected, neurons in all regions of the brain express similar levels of IT15 mRNA (Landwehrmeyer et al., 1995). The ability of the HD mutation to produce selective neural degeneration despite nearly universal expression of the gene among neurons may be related to metabolic or excitotoxic mechanisms. For many years, it has been noted that HD patients are thin, suggesting the presence of a systemic disturbance of energy metabolism. 
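Because the mutation is an uninterrupted run of CAG triplets near the 5'-end of IT15, measuring it in a DNA sequence reduces to finding the longest contiguous CAG run. The snippet below is a toy illustration on an invented sequence; real assays size the repeat by PCR, and this sketch ignores reading frame.

```python
import re

def longest_cag_run(dna: str) -> int:
    """Length, in triplets, of the longest uninterrupted CAG run.
    Ignores reading frame; for illustration only."""
    runs = re.findall(r"(?:CAG)+", dna.upper())
    return max((len(run) // 3 for run in runs), default=0)

# Invented toy sequence: a 5-triplet CAG (polyglutamine) tract followed
# by a short (CCG)n (proline) tract, mirroring the organization in the text.
toy = "ATG" + "CAG" * 5 + "CCG" * 3 + "GAA"
print(longest_cag_run(toy))  # -> 5
```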
In animal models, agonists for the NMDA subtype of excitatory amino acid receptor can cause pathology similar to that seen in HD when they are injected into the striatum (Beal et al., 1986). More interesting, however, is the fact that inhibitors of complex II of the mitochondrial respiratory chain also can produce HD-like striatal lesions, even when given systemically (Beal et al., 1993). Furthermore, this pathology can be diminished by NMDA-receptor antagonists, suggesting that this is an example of a metabolic impairment giving rise to excitotoxic neuronal injury. Studies employing magnetic resonance imaging (MRI) spectroscopy have provided direct evidence of an alteration in energy metabolism in HD in vivo (Jenkins et al., 1993). Thus, the link between the widespread expression of the gene for the abnormal IT15 protein in HD and the selective vulnerability of neurons in the disease may arise from the interaction of a widespread defect in energy metabolism with the intrinsic properties of striatal neurons, including their capacity and need for oxidative metabolism as well as the types of glutamate receptors present. This hypothesis has a number of potentially important therapeutic implications. It is unlikely that it will be possible in the near future to correct the genetic defect in the brains of individuals with HD, but it may be possible to develop agents that alter metabolic function or protect against excitotoxic injury and thereby arrest or modify the course of the disease. Symptomatic Treatment of Huntington's Disease Practical treatment for symptomatic HD emphasizes the selective use of medications (Shoulson, 1992). No current medication slows the progression of the disease, and many medications can impair function because of side effects. Treatment is needed for patients who are depressed, irritable, paranoid, excessively anxious, or psychotic.
Depression can be treated effectively with standard antidepressant drugs with the caveat that those drugs with substantial anticholinergic profiles can exacerbate chorea. Fluoxetine (Chapter 19: Drugs and the Treatment of Psychiatric Disorders: Depression and Anxiety Disorders) is effective treatment for both the depression and the irritability manifest in symptomatic HD. Carbamazepine (Chapter 21: Drugs Effective in the

Therapy of the Epilepsies) also has been found to be effective for depression. Paranoia, delusional states, and psychosis usually require treatment with antipsychotic drugs, but the doses required often are lower than those usually used in primary psychiatric disorders. These agents also reduce cognitive function and impair mobility and thus should be used in the lowest doses possible and be discontinued when the psychiatric symptoms are resolved. In individuals with predominantly rigid HD, clozapine (Chapter 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania) or carbamazepine may be more effective for treatment of paranoia and psychosis. The movement disorder of HD per se only rarely justifies pharmacological therapy. For those with large-amplitude chorea causing frequent falls and injury, dopamine-depleting agents such as tetrabenazine or reserpine (Chapter 33: Antihypertensive Agents and the Drug Therapy of Hypertension) can be tried, although patients must be monitored for hypotension and depression. Antipsychotic agents also can be used, but these often do not improve overall function because they decrease fine motor coordination and increase rigidity. Many HD patients exhibit worsening of involuntary movements as a result of anxiety or stress. In these situations, judicious use of sedative or anxiolytic benzodiazepines can be very helpful. In juvenile-onset cases where rigidity rather than chorea predominates, dopamine agonists have had variable success in the improvement of rigidity. These individuals also occasionally develop myoclonus and seizures that can be responsive to clonazepam, valproic acid, or other anticonvulsants. Amyotrophic Lateral Sclerosis Clinical Features and Pathology ALS is a disorder of the motor neurons of the ventral horn of the spinal cord and the cortical neurons that provide their afferent input. The ratio of males to females affected is approximately 1.5:1 (Kurtzke, 1982). 
The disorder is characterized by rapidly progressive weakness, muscle atrophy and fasciculations, spasticity, dysarthria, dysphagia, and respiratory compromise. Sensory function generally is spared, as is cognitive, autonomic, and oculomotor activity. ALS usually is progressive and fatal, with most affected patients dying of respiratory compromise and pneumonia after 2 to 3 years, although occasional individuals have a more indolent course and survive for many years. The pathology of ALS corresponds closely to the clinical features: There is prominent loss of the spinal and brainstem motor neurons that project to striated muscles (although the oculomotor neurons are spared) as well as loss of the large pyramidal motor neurons in layer V of motor cortex, which are the origin of the descending corticospinal tracts. In familial cases, Clarke's column and the dorsal horns sometimes are affected (Caroscio et al., 1987; Rowland, 1994). Etiology About 10% of cases of ALS are familial (FALS), usually with an autosomal dominant pattern of inheritance (Jackson and Bryan, 1998). Most of the mutations responsible have not been identified, but an important subset of FALS patients are families with a mutation in the gene for the enzyme superoxide dismutase (SOD1) (Rosen et al., 1993). Mutations in this protein account for about 20% of cases of FALS. Most of the mutations are alterations of single amino acids, but more than 30 different alleles have been found in different kindreds. Transgenic mice expressing mutant human SOD1 develop a progressive degeneration of motor neurons that closely mimics the human disease, providing an important animal model for research and pharmaceutical trials. Interestingly, many of the mutations of SOD1 that can cause disease do not reduce the capacity of the enzyme to perform its primary function, the catabolism of potentially toxic superoxide radicals. 
Thus, as may be the case in HD, mutations in SOD1 may confer a toxic "gain of function," the precise nature of which is

unclear. More than 90% of ALS cases are sporadic and are not associated with abnormalities of SOD1 or any other known gene. The cause of the motor neuron loss in sporadic ALS is unknown, but theories include autoimmunity, excitotoxicity, free radical toxicity, and viral infection (Rowland, 1994; Cleveland, 1999). Most of these ideas are not well supported by available data, but there is evidence that glutamate reuptake may be abnormal in the disease, leading to accumulation of glutamate and excitotoxic injury (Rothstein et al., 1992). The only currently approved therapy for ALS, riluzole, is based on these observations. Spasticity and the Spinal Reflex Spasticity is an important component of the clinical features of ALS, in that the presence of spasticity often leads to considerable pain and discomfort and reduces mobility, which already is compromised by weakness. Furthermore, spasticity is the feature of ALS that is most amenable to present forms of treatment. Spasticity is defined as an increase in muscle tone characterized by an initial resistance to passive displacement of a limb at a joint, followed by a sudden relaxation (the so-called clasped-knife phenomenon). Spasticity is the result of the loss of descending inputs to the spinal motor neurons, and the character of the spasticity depends on which nervous system pathways are affected (Davidoff, 1990). Whole repertoires of movement can be generated directly at the spinal cord level; it is beyond the scope of this chapter to describe these in detail. The monosynaptic tendon-stretch reflex is the simplest of the spinal mechanisms contributing to spasticity. Primary Ia afferents from muscle spindles, activated when the muscle is rapidly stretched, synapse directly on motor neurons going to the stretched muscle, causing it to contract and resist the movement. 
A collateral of the primary Ia afferent synapses on an "Ia-coupled interneuron" that inhibits the motor neurons innervating the antagonist of the stretched muscle, allowing the contraction of the muscle to be unopposed. Upper motor neurons from the cerebral cortex (the pyramidal neurons) suppress spinal reflexes and the lower motor neurons indirectly by activating the spinal cord inhibitory interneuron pools. The pyramidal neurons use glutamate as a neurotransmitter. When the pyramidal influences are removed, the reflexes are released from inhibition and become more active, leading to hyperreflexia. Other descending pathways from the brainstem, including the rubro-, reticulo-, and vestibulospinal pathways and the descending catecholamine pathways, also influence spinal reflex activity. When just the pyramidal pathway is affected, extensor tone in the legs and flexor tone in the arms are increased. When the vestibulospinal and catecholamine pathways are impaired, increased flexion of all extremities is observed and light cutaneous stimulation can lead to disabling whole-body spasms. In ALS, pyramidal pathways are impaired with relative preservation of the other descending pathways, resulting in hyperactive deep-tendon reflexes, impaired fine motor coordination, increased extensor tone in the legs, and increased flexor tone in the arms. The gag reflex often is overactive as well. Treatment of ALS with Riluzole Riluzole (2-amino-6-[trifluoromethoxy]benzothiazole; RILUTEK) is an agent with complex actions in the nervous system (Bryson et al., 1996; Wagner and Landis, 1997).

Riluzole is orally absorbed and highly protein bound. It undergoes extensive metabolism in the liver by both cytochrome P450-mediated hydroxylation and by glucuronidation. Its half-life is about 12 hours. In vitro studies have shown that riluzole has both presynaptic and postsynaptic effects. It inhibits glutamate release, but it also blocks postsynaptic NMDA- and kainate-type glutamate receptors and inhibits voltage-dependent sodium channels. Some of the effects of riluzole in vitro are blocked by pertussis toxin, implicating the drug's interaction with an as-yet unidentified G protein-coupled receptor. In clinical trials, riluzole has modest but genuine effects on the survival of patients with ALS. In the largest trial conducted to date, with nearly 1000 patients, the median duration of survival was extended by about 60 days (Lacomblez et al., 1996). The recommended dose is 50 mg every 12 hours, taken 1 hour before or 2 hours after a meal. Riluzole usually is well tolerated, although nausea or diarrhea may occur. Rarely, riluzole may produce hepatic injury with elevations of serum transaminases, and periodic monitoring of these is recommended. Although the magnitude of the effect of riluzole on ALS is small, it represents a significant therapeutic milestone in the treatment of a disease refractory to all previous treatments. Symptomatic Therapy of Spasticity The most useful agent for the symptomatic treatment of spasticity in ALS is baclofen (LIORESAL), a GABAB agonist. Initial doses of 5 to 10 mg a day are recommended, but the dose can be increased to as much as 200 mg a day if necessary. If weakness occurs, the dose should be lowered. In addition to oral administration, baclofen also can be delivered directly into the space around the spinal cord by use of a surgically implanted pump and an intrathecal catheter.
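Given the riluzole values stated above, a half-life of about 12 hours and a 12-hour dosing interval, first-order kinetics predict roughly twofold accumulation at steady state relative to a single dose. A sketch of that calculation, assuming an idealized one-compartment model that real patients only approximate:

```python
import math

def accumulation_ratio(t_half_hours: float, interval_hours: float) -> float:
    """Steady-state accumulation ratio for repeated dosing with
    first-order elimination: R = 1 / (1 - exp(-k * tau))."""
    k = math.log(2) / t_half_hours  # elimination rate constant
    return 1.0 / (1.0 - math.exp(-k * interval_hours))

# Riluzole: t1/2 ~ 12 h, dosed every 12 h (values from the text).
print(round(accumulation_ratio(12.0, 12.0), 3))  # -> 2.0
```

Dosing at an interval equal to one half-life is a common pattern precisely because it caps accumulation near twofold while limiting trough-to-peak swings.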
This approach minimizes the adverse effects of the drug, especially sedation, but it carries the risk of potentially life-threatening CNS depression and should be used only by physicians trained in delivering chronic intrathecal therapy. Tizanidine (ZANAFLEX) is an agonist of α2-adrenergic receptors in the central nervous system. It reduces muscle spasticity and is assumed to act by increasing presynaptic inhibition of motor neurons. Tizanidine is most widely used in the treatment of spasticity in multiple sclerosis or after stroke, but it also may be effective in patients with ALS. Treatment should be initiated at a low dose of 2 to 4 mg at bedtime, and titrated upwards gradually. Drowsiness, asthenia, and dizziness may limit the dose that can be administered. Benzodiazepines (see Chapter 17: Hypnotics and Sedatives), such as clonazepam (KLONOPIN), are effective antispasmodics, but they may contribute to respiratory depression in patients with advanced ALS. Dantrolene (DANTRIUM) also is approved in the United States for the treatment of muscle spasm. In contrast to other agents discussed, dantrolene acts directly on skeletal muscle fibers, impairing calcium ion flux across the sarcoplasmic reticulum. Because it can exacerbate muscular weakness, it is not used in ALS, but is effective in treating spasticity associated with stroke or spinal cord injury and in treating malignant hyperthermia. Dantrolene may cause hepatotoxicity, so it is important to perform liver function tests before and during therapy with the drug. Prospectus Although advances in the symptomatic therapy of the neurodegenerative disorders, particularly PD, have improved the lives of many patients, the goal of current research is to develop treatments that can prevent, retard, or reverse neuronal cell death. Promising areas for drug development are the mechanisms implicated in several of the disorders: excitotoxicity, defects in energy metabolism, and oxidative stress.
Glutamate antagonists have great potential, but their use is limited by the relatively nonselective activity of the available agents. Increased knowledge of the structure and function of glutamate receptor subtypes should make more selective and useful agents available. Pharmacological reduction of oxidative stress also is feasible, despite the disappointing results of initial clinical trials with tocopherol and selegiline. Neural growth factors are another important area for drug development. Several factors that promote the differentiation of neurons and the

establishment of neural connections during development have been identified, and these may eventually prove useful in retarding or reversing neuronal death. A more direct and currently accessible approach to reversing neuronal loss is surgical transplantation of neurons; this has been accomplished in PD with a moderate degree of success and has been proposed as a treatment for other conditions such as AD. In addition to these general approaches to neurodegeneration, more specific treatments for the various diseases should become feasible with advances in knowledge of their etiology. For example, discovery of the role of β-amyloid in AD has sparked the study of agents that alter its synthesis or prevent its accumulation; similarly, discovery of the HD gene is likely to lead to novel treatment strategies for that disorder. For further information regarding neurodegenerative diseases for which the drugs discussed in this chapter are useful, the reader is referred to the following chapters in Harrison's Principles of Internal Medicine, 16th ed., McGraw-Hill, New York, 2005: Parkinson's disease, Chapter 351; Alzheimer's disease and Huntington's disease, Chapter 350; and amyotrophic lateral sclerosis, Chapter 353.

Chapter 23. Opioid Analgesics


Overview Opioids have been the mainstay of pain treatment for thousands of years, and remain so today. Opioids exert their therapeutic effects by mimicking the action of endogenous opioid peptides at opioid receptors. Effects on both local neurons and intrinsic pain-modulating circuitry lead to analgesia, other therapeutic effects, and also to undesirable side effects. This chapter will provide the background necessary to understand the mechanisms of action and important pharmacological properties of clinically used opioids. First, the endogenous opioid system is discussed with a focus on the receptors and circuitry utilized by the opioids. A discussion of clinically used compounds follows, describing in detail their pharmacological properties and therapeutic uses. Routes of administration, pain treatment strategies, and current therapeutic guidelines also are presented. This information should provide a rational basis for understanding opioid actions, thereby reducing fear of opioid use and encouraging effective treatment of pain. Opioid Analgesics: Introduction It is now well known that opioids such as heroin and morphine exert their effects by mimicking naturally occurring substances, termed endogenous opioid peptides or endorphins. Much now is known about the basic biology of the endogenous opioid system and its molecular and biochemical complexity, widespread anatomy, and diversity. The diverse functions of this system include the best-known sensory role, prominent in inhibiting responses to painful stimuli; a modulatory role in gastrointestinal, endocrine, and autonomic functions; an emotional role, evident in the powerful rewarding and addicting properties of opioids; and a cognitive role in the modulation of learning and memory. The endogenous opioid system is complex and subtle, with a great diversity in endogenous ligands (over a dozen), yet with only four major receptor types. 
This chapter presents key facts about the biochemical and functional nature of the opioid system. This information then is used to establish a basis for understanding the actions of clinically used opioid drugs and current strategies for pain treatment. Terminology

The term opioid refers broadly to all compounds related to opium. The word opium is derived from opos, the Greek word for juice, the drug being derived from the juice of the opium poppy, Papaver somniferum. Opiates are drugs derived from opium, and include the natural products morphine, codeine, thebaine, and many semisynthetic congeners derived from them. Endogenous opioid peptides are the naturally occurring ligands for opioid receptors. The term endorphin is used synonymously with endogenous opioid peptides, but also refers to a specific endogenous opioid, β-endorphin. The term narcotic was derived from the Greek word for stupor. At one time, the term referred to any drug that induced sleep, but then it became associated with opioids. It often is used in a legal context to refer to a variety of substances with abuse or addictive potential. History The first undisputed reference to opium is found in the writings of Theophrastus in the third century B.C. Arabian physicians were well versed in the uses of opium; Arabian traders introduced the drug to the Orient, where it was employed mainly for the control of dysenteries. During the Middle Ages, many of the uses of opium were appreciated. In 1680, Sydenham wrote: "Among the remedies which it has pleased Almighty God to give to man to relieve his sufferings, none is so universal and so efficacious as opium." Opium contains more than 20 distinct alkaloids. In 1806, Sertürner reported the isolation of a pure substance in opium that he named morphine, after Morpheus, the Greek god of dreams. The discovery of other alkaloids in opium quickly followed: codeine by Robiquet in 1832, and papaverine by Merck in 1848. By the middle of the nineteenth century, the use of pure alkaloids rather than crude opium preparations began to spread throughout the medical world. In addition to the remarkable beneficial effects of opioids, the toxic side effects and addictive potential of these drugs also have been known for centuries.
These problems stimulated a search for potent, synthetic opioid analgesics free of addictive potential and other side effects. Unfortunately, all of the synthetic compounds that have been introduced into clinical use share the liabilities of classical opioids. However, the search for new opioid agonists led to the synthesis of opioid antagonists and compounds with mixed agonist/antagonist properties, which expanded therapeutic options and provided important tools for exploring mechanisms of opioid actions. Until the early 1970s, the endogenous opioid system was totally unknown. The actions of morphine, heroin, and other opioids as antinociceptive and addictive agents, while well described, often were studied in the context of interactions with other neurotransmitter systems, such as the monoaminergic and cholinergic systems. Some investigators suggested the existence of a specific opioid receptor because of the unique structural requirements of opiate ligands (Beckett and Casy, 1954), but the presence of an opiate-like system in the brain remained unproven. A particularly misleading observation was that the administration of the opioid antagonist naloxone to a normal animal produced little effect, although the drug was effective in reversing or preventing the effects of exogenous opiates. The first physiological evidence suggesting an endogenous opioid system was the demonstration that analgesia produced by electrical stimulation of certain brain regions was reversed by naloxone (Akil et al., 1972; Akil et al., 1976). Pharmacological evidence for an opiate receptor also was building. In 1973, investigators in three laboratories demonstrated opiate binding sites in the brain (Pert and Snyder, 1973; Simon et al., 1973; Terenius, 1973). This was the first use of radioligand binding assays to demonstrate the presence of membrane-associated neurotransmitter receptors in the brain.
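The radioligand binding assays mentioned above rest on a simple saturation model: specific binding rises hyperbolically with free ligand concentration toward a maximal density of sites. A minimal sketch of that one-site model follows; the Bmax and Kd values are hypothetical, chosen only for illustration and not taken from the studies cited in the text.

```python
# Illustrative sketch of the one-site saturation model that underlies
# radioligand binding assays: B = Bmax * [L] / (Kd + [L]).
# Bmax and Kd below are hypothetical example values.

def specific_binding(ligand_nM, bmax_fmol_per_mg, kd_nM):
    """Specifically bound ligand (fmol/mg protein) for a one-site model."""
    return bmax_fmol_per_mg * ligand_nM / (kd_nM + ligand_nM)

# Hypothetical membrane preparation: Bmax = 200 fmol/mg protein, Kd = 2 nM.
for L in (0.5, 2.0, 8.0, 32.0):
    bound = specific_binding(L, 200.0, 2.0)
    print(f"[L] = {L:5.1f} nM -> bound = {bound:6.1f} fmol/mg")
```

Note that when the free ligand concentration equals Kd, exactly half of the sites are occupied, which is how Kd is read off a saturation curve.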
Stimulation-produced analgesia, its naloxone reversibility, and the discovery of opioid receptors strongly pointed to the existence of endogenous opioids. In 1975, Hughes and associates identified an endogenous, opiate-like factor that they called enkephalin (from the head) (Hughes et al., 1975).

Soon after, two more classes of endogenous opioid peptides were isolated, the dynorphins and endorphins. Details of these discoveries and the unique properties of the opioid peptides have been reviewed previously (Akil et al., 1984). Given the large number of endogenous ligands being discovered, it was not surprising that multiple classes of opioid receptors also were found. The concept of opioid-receptor multiplicity arose shortly after the initial demonstration of opiate binding sites. Based on results of in vivo studies in dogs, Martin and colleagues postulated the existence of multiple types of opiate receptors (Martin et al., 1976). Receptor-binding studies and subsequent cloning confirmed the existence of three main receptor types: μ, δ, and κ. A fourth member of the opioid peptide receptor family, the nociceptin/orphanin FQ (N/OFQ) receptor, was cloned in 1994 (Bunzow et al., 1994; Mollereau et al., 1994). In addition to these four major classes, a number of subtypes have been proposed, such as epsilon, often based on bioassays from different species (Schulz et al., 1979); iota (Oka, 1980); lambda (Grevel and Sadee, 1983); and zeta (Zagon et al., 1989). In 2000, the Committee on Receptor Nomenclature and Drug Classification of the International Union of Pharmacology adopted the terms MOP, DOP, and KOP to indicate the μ-, δ-, and κ-opioid peptide receptors, respectively. The original Greek-letter designation is used in this and other chapters. The Committee also recommended the term NOP for the N/OFQ receptor.

Endogenous Opioid Peptides

Three distinct families of classical opioid peptides have been identified: the enkephalins, endorphins, and dynorphins. Each family is derived from a distinct precursor polypeptide and has a characteristic anatomical distribution. These precursors, preproopiomelanocortin, preproenkephalin, and preprodynorphin, are encoded by three corresponding genes.
Each precursor is subject to complex cleavages and posttranslational modifications resulting in the synthesis of multiple active peptides. The opioid peptides share the common amino-terminal sequence Tyr-Gly-Gly-Phe-(Met or Leu), which has been called the "opioid motif." This motif is followed by various C-terminal extensions yielding peptides ranging from 5 to 31 residues (Table 23–1). The major opioid peptide derived from proopiomelanocortin (POMC) is β-endorphin. Although β-endorphin contains the sequence for met-enkephalin at its amino terminus, it is not converted to this peptide; met-enkephalin is derived from the processing of preproenkephalin. In addition to β-endorphin, the POMC precursor also is processed into the nonopioid peptides adrenocorticotropic hormone (ACTH), melanocyte-stimulating hormone (α-MSH), and β-lipotropin (β-LPH). Previous biochemical work (Mains et al., 1977) had suggested a common precursor for the stress hormone ACTH and the opioid peptide β-endorphin. This association implied a close physiological linkage between the stress axis and opioid systems, which was validated by many studies of the phenomenon of stress-induced analgesia (Akil et al., 1986). Proenkephalin contains multiple copies of met-enkephalin as well as a single copy of leu-enkephalin. Prodynorphin contains three peptides of differing lengths that all begin with the leu-enkephalin sequence: dynorphin A, dynorphin B, and neoendorphin (Figure 23–1). The anatomical distribution of these peptides in the central nervous system (CNS) has been reviewed thoroughly by Mansour et al. (1988).

Figure 23–1. Peptide Precursors. (From Akil et al., 1998.) POMC, proopiomelanocortin; ACTH, adrenocorticotropic hormone; β-LPH, β-lipotropin.
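The shared amino-terminal opioid motif, and its absence in nociceptin/orphanin FQ (discussed below), can be shown with a short check against the standard single-letter sequences of these peptides; this is an illustration only, not an analysis method from the chapter.

```python
# Checking which peptides begin with the opioid motif
# Tyr-Gly-Gly-Phe ("YGGF" in single-letter amino acid code).
# Standard single-letter sequences for these peptides.

PEPTIDES = {
    "met-enkephalin": "YGGFM",
    "leu-enkephalin": "YGGFL",
    "dynorphin A":    "YGGFLRRIRPKLKWDNQ",
    "N/OFQ":          "FGGFTGARKSARKLANQ",  # Phe replaces Tyr at position 1
}

def has_opioid_motif(sequence):
    """True if the peptide carries the amino-terminal Tyr-Gly-Gly-Phe motif."""
    return sequence.startswith("YGGF")

for name, seq in PEPTIDES.items():
    status = "opioid motif" if has_opioid_motif(seq) else "no motif (Phe1)"
    print(f"{name:15s} {status}")
```

The single Tyr-to-Phe substitution that distinguishes N/OFQ is exactly the change the text describes as abolishing binding at the three classical opioid receptors.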

A novel endogenous opioid peptide was cloned in 1995 (Meunier et al., 1995; Reinscheid et al., 1995). This peptide has a significant sequence homology to dynorphin A, with an identical length of 17 amino acids, identical carboxy-terminal residues, and a slight modification of the amino-terminal opioid core (Phe-Gly-Gly-Phe instead of Tyr-Gly-Gly-Phe; see Table 23–1). The removal of this single hydroxyl group is sufficient to abolish interactions with the three classical opioid-peptide receptors. This peptide was called orphanin FQ (OFQ) by one group of investigators and nociceptin (N) by another, because it lowered pain threshold under certain conditions. The structure of the N/OFQ precursor (Figure 23–2) suggests that it may encode other biologically active peptides (Nothacker et al., 1996; Pan et al., 1996). Immediately downstream of N/OFQ is a 17-amino-acid peptide (orphanin-2), which also starts with phenylalanine and ends with glutamine but is otherwise distinct from N/OFQ, as well as a putative peptide upstream from N/OFQ, which may be liberated upon posttranslational processing (nocistatin). The N/OFQ system represents a new neuropeptide system with a high degree of sequence identity to the opioid peptides. However, the slight change in structure results in a profound alteration in function. N/OFQ has behavioral and pain modulatory properties distinct from those of the three classical opioid peptides (see below).

Figure 23–2. Human Proorphanin-Derived Peptides.

The anatomical distribution of POMC-producing cells is relatively limited within the CNS, occurring mainly in the arcuate nucleus and nucleus tractus solitarius. These neurons project widely to limbic and brainstem areas and to the spinal cord (Lewis et al., 1987). There also is evidence of POMC production in the spinal cord (Gutstein et al., 1992). The distribution of POMC corresponds to areas of the human brain where electrical stimulation can relieve pain (Pilcher et al., 1988). Peptides from POMC occur in both the pars intermedia and the pars distalis of the pituitary and also are contained in pancreatic islet cells. The peptides from prodynorphin and proenkephalin are distributed widely throughout the CNS and frequently are found together. Although each family of peptides usually is located in different groups of neurons, occasionally more than one family is expressed within the same neuron (Weihe et al., 1988). Of particular note, proenkephalin peptides are present in areas of the CNS that are presumed to be related to the perception of pain (e.g., laminae I and II of the spinal cord, the spinal trigeminal nucleus, and the periaqueductal gray), to the modulation of affective behavior (e.g., amygdala, hippocampus, locus ceruleus, and the cerebral cortex), to the modulation of motor control (caudate nucleus and globus pallidus), to the regulation of the autonomic nervous system (medulla oblongata), and to neuroendocrinological functions (median eminence). Although there are a few long enkephalinergic fiber tracts, these peptides are contained primarily in interneurons with short axons. The peptides from proenkephalin also are found in the adrenal medulla and in nerve plexuses and exocrine glands of the stomach and intestine. The N/OFQ precursor has a unique anatomical distribution (Neal et al., 1999b). The distribution of this system suggests important roles in hippocampus, cortex, and numerous sensory sites.
N/OFQ produces a complex behavioral profile, including effects on drug reward and reinforcement (Bertorelli et al., 2000; Devine et al., 1996a; Devine et al., 1996b), stress responsiveness (Devine et al., 2001; Koster et al., 1999), and learning and memory processes (Koster et al., 1999; Manabe et al., 1998). Studies of the effect of N/OFQ on pain sensitivity have produced conflicting results, which may be reconciled by data suggesting that the effects of N/OFQ on pain sensitivity depend on the underlying behavioral state of the animal (Pan et al., 2000) (see below). Analogous mechanisms also could explain some of the conflicting results with other physiological processes. However, more studies are needed before a general role can be ascribed to the N/OFQ system, including the investigation of other active peptides that may be derived from the N/OFQ precursor (Figure 23–2). Nocistatin has been tested behaviorally and found to produce effects opposite to those of N/OFQ (Okuda-Ashitaka et al., 1998). In sum, these findings, coupled with the extensive anatomy of the system, suggest that the N/OFQ precursor plays a complex role in the brain that is yet to be fully appreciated. Not all cells that make a given precursor polypeptide store and release the same mixture of active opioid peptides, because of differential processing secondary to variations in the cellular complement of peptidases that produce and degrade the active opioid fragments (Akil et al., 1984).

In addition, processing of these peptides is altered by physiological demands, leading to a different mix of peptides being released by the same cell under different conditions. For example, chronic morphine treatment (Bronstein et al., 1990) or stress (Akil et al., 1985) can alter the forms of β-endorphin released by cells, which could possibly underlie some observed physiological adaptations. Although the endogenous opioid peptides appear to function as neurotransmitters, modulators of neurotransmission, or neurohormones, the full extent of their physiological role is not completely understood (Akil et al., 1988). The elucidation of the physiological roles of the opioid peptides has been made more difficult by their frequent coexistence with other putative neurotransmitters within a given neuron.

Opioid Receptors

Three classical opioid receptor types, μ, δ, and κ, have been studied extensively. The more recently discovered N/OFQ receptor, initially called the opioid-receptor-like 1 (ORL-1) receptor or "orphan" opioid receptor, has added a new dimension to the study of opioids. Highly selective ligands that allowed for type-specific labeling of the three classical opioid receptors (e.g., DAMGO for μ, DPDPE for δ, and U-50,488 and U-69,593 for κ) (Handa et al., 1981; Mosberg et al., 1983; Voightlander et al., 1983) became available in the early 1980s. These tools made possible the definition of the ligand-binding characteristics of each receptor type and the determination of the anatomical distribution of the receptors using autoradiographic techniques. Each major opioid receptor has a unique anatomical distribution in brain, spinal cord, and the periphery (Mansour et al., 1988; Neal et al., 1999b). These distinctive patterns of localization suggested possible functions that subsequently have been investigated in pharmacological and behavioral studies.
The study of the biological functions of opioid receptors in vivo was aided by the synthesis of selective antagonists and agonists. Among the most commonly used antagonists are cyclic analogs of somatostatin such as CTOP as μ-receptor antagonists, a derivative of naltrexone called naltrindole as a δ-receptor antagonist, and a bivalent derivative of naltrexone called nor-binaltorphimine (nor-BNI) as a κ-receptor antagonist (Gulya et al., 1986; Portoghese et al., 1987; Portoghese et al., 1988). In general, functional studies using selective agonists and antagonists have revealed substantial parallels between μ and δ receptors and dramatic contrasts between μ/δ and κ receptors. In vivo infusions of selective antagonists and agonists also were used to establish the receptor types involved in mediating various opioid effects (Table 23–2). Most of the clinically used opioids are relatively selective for μ receptors, reflecting their similarity to morphine (Tables 23–3 and 23–4). However, it is important to note that drugs that are relatively selective at standard doses will interact with additional receptor subtypes when given at sufficiently high doses, leading to possible changes in their pharmacological profile. This is especially true as doses are escalated to overcome tolerance. Some drugs, particularly mixed agonist-antagonist agents, interact with more than one receptor class at usual clinical doses. The actions of these drugs are particularly interesting, since they may act as an agonist at one receptor and an antagonist at another. There is little agreement regarding the exact classification of opioid receptor subtypes. Pharmacological studies have suggested the existence of multiple subtypes of each receptor. The complex literature on opioid-receptor subtypes (see Akil and Watson, 1994) strongly suggests the presence of at least one additional subtype with good affinity for the benzomorphan class of opiate alkaloids. The data for δ-opioid receptor subtypes are intriguing.
While early support for the possibility of multiple δ receptors came from radioligand-binding studies (Negri et al., 1991), the strongest evidence derives from behavioral studies (Jiang et al., 1991; Sofuoglu et al., 1991), which led to the proposal that two δ-receptor sites exist, δ1 and δ2. In the case of the μ receptor, behavioral

and pharmacological studies led to the proposal of μ1 and μ2 subtypes (Pasternak, 1986). The μ1 site is proposed to be a very high affinity receptor with little discrimination between μ and δ ligands. A parallel hypothesis (Rothman et al., 1988) holds that there is a high-affinity μ/δ complex rather than a distinct μ1 site. Although molecular cloning studies have not readily supported the existence of these subtypes as distinct molecules, recent findings (see Molecular Studies of Opioid Receptors and Their Ligands) regarding modified specificity for opioid ligands due to heterodimerization of receptors may provide an explanation for observed pharmacological diversity (Jordan and Devi, 1999).

Molecular Studies of Opioid Receptors and Their Ligands

For many years, the study of multiple opioid receptors greatly profited from the availability of a rich array of natural and synthetic ligands but was limited by the absence of opioid receptor clones. In 1992, the mouse δ receptor was cloned from the NG108-15 cell line (Evans et al., 1992; Kieffer et al., 1992). Subsequently, the other two major types of classical opioid receptors were cloned from various rodent species (Chen et al., 1993; Kong et al., 1994; Meng et al., 1993; Minami et al., 1993; Thompson et al., 1993; Wang et al., 1993; Yasuda et al., 1993). The N/OFQ receptor was cloned as a result of searches for novel types or subtypes of opioid receptors. The coding regions for the opioid-peptide receptors subsequently were isolated and chromosomally assigned (Befort et al., 1994; Yasuda et al., 1994; Wang et al., 1994). In the case of μ, the cloned sequence is the classical morphine-like receptor, rather than the proposed μ1. With δ, no differentiation between the two proposed subtypes by binding appears possible, and the cloned receptor recognizes all δ-selective ligands regardless of their behavioral assignment as δ1 or δ2. For κ, the cloned receptor is the classical κ receptor, rather than the proposed benzomorphan binding site.
All four opioid receptors belong to the G protein-coupled receptor (GPCR) family (see Chapter 2: Pharmacodynamics: Mechanisms of Drug Action and the Relationship Between Drug Concentration and Effect) and share extensive sequence homologies (Figure 23–3). The N/OFQ receptor has high structural homology with the classical opioid receptors, but it has very low or no affinity for binding conventional opioid ligands (Bunzow et al., 1994; Chen et al., 1994; Mollereau et al., 1994). The structural similarities of the N/OFQ receptor and the three classical opioid receptors are highest in the transmembrane regions and cytoplasmic domains and lowest in the extracellular domains critical for ligand selectivity (Figure 23–3B).

Figure 23–3. A. Structural Homology among the Three Opioid Receptors. B. Structural Homology among the Three Opioid Receptors and the N/OFQ Receptor. (From Akil et al., 1998, with permission.) Numbers indicate the percent of identical amino acids in the segment.
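The percent-identity figures used in such receptor homology comparisons are simple counts of matching residues over an aligned segment. A minimal sketch follows; the two "transmembrane segment" strings are hypothetical stand-ins, not actual receptor sequences.

```python
# Illustrative sketch: percent identity between two pre-aligned,
# equal-length sequence segments, as used in receptor homology figures.

def percent_identity(seq_a, seq_b):
    """Percent of identical residues between two aligned, equal-length segments."""
    if len(seq_a) != len(seq_b):
        raise ValueError("segments must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

tm_mu    = "LPFQSVNYLMGTW"   # hypothetical mu-receptor segment
tm_delta = "LPFQSANYLMGSW"   # hypothetical delta-receptor segment
print(f"identity: {percent_identity(tm_mu, tm_delta):.0f}%")
```

Real comparisons are made after sequence alignment (which may introduce gaps); this sketch assumes the alignment step has already been done.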

It is possible that further cloning experiments may identify unique genes encoding opioid receptor subtypes. However, it has been suggested that, if multiple opioid receptor subtypes exist, they could be derived from a single gene, and multiple mechanisms might exist to achieve distinct pharmacological profiles. Two potential pathways to opioid receptor diversity are alternative splicing of receptor RNA and dimerization of receptor proteins. Alternative splicing of receptor heteronuclear RNA (e.g., exon skipping and intron retention) is thought to play an important role in producing in vivo diversity within many members of the GPCR superfamily (Kilpatrick et al., 1999). Splice variants may exist within each of the three opioid receptor families, and this alternative splicing of receptor transcripts may be critical for the diversity of opioid receptors. A technique widely used to identify potential sites of alternative splicing is antisense oligodeoxynucleotide (ODN) mapping. The ability of antisense ODNs to target specific regions of cDNA permits the systematic evaluation of the contribution of individual exons to observed receptor properties. Antisense ODN targeting of exon 1 of the rat and mouse μ-opioid receptors blocks morphine analgesia in these species (Rossi et al., 1995; Rossi et al., 1996a; Rossi et al., 1997). By contrast, administration of antisense ODNs targeting exon 2 does not block morphine analgesia but prevents the analgesia produced by heroin, fentanyl, and the morphine metabolite morphine-6-glucuronide (Rossi et al., 1995; Rossi et al., 1996a; Rossi et al., 1997). An analogous disruption of morphine-6-glucuronide-induced but not morphine-induced analgesia is observed following administration of antisense ODNs targeting exon 3 (Rossi et al., 1997). These

results imply that unique receptor mechanisms mediate the analgesic effects of a variety of opioids and are consistent with the claim that these unique receptor mechanisms could be achieved via alternative splicing. The use of antisense ODNs also has led to the identification of potential sites for splice variation in the δ- and κ-opioid receptors (Pasternak and Standifer, 1995). Central to the claim that these results reflect the existence of splice variants is the in vivo isolation of such variants. A μ-opioid receptor splice variant has been identified that differs considerably from the native receptor within its C-terminus (Zimprich et al., 1995). As might be expected on the basis of the splicing location, this variant exhibits a binding profile similar to that of the cloned μ-opioid receptor but does not readily undergo the desensitization frequently observed following exposure to agonist. Thus, the existence of this splice variant cannot explain the differential analgesic sensitivities described above. However, just such a variant was detected in mice with a targeted disruption of exon 1 (Schuller et al., 1999). Transcripts of the μ-opioid receptor that contained exons 2 and 3 were identified in these mice. Moreover, whereas morphine-induced analgesia was abolished, heroin- and M6G-induced analgesia were unaffected. The interaction of two receptors to form a unique structure (dimerization) also has been accorded an important role in regulating receptor function. For example, dimerization of GABABR1 and GABABR2 subunits is required to form a functional GABAB receptor for gamma-aminobutyric acid (e.g., Jones et al., 1998). Both cloned δ- and κ-opioid receptors have been shown to exist in vitro as homodimers (Cvejic and Devi, 1997). However, the most interesting findings have been generated by studies showing dimerization between different opioid receptor types.
Jordan and Devi (1999) showed that κ- and δ-opioid receptors can exist as heterodimers both in heterologous expression systems and in brain, based on coimmunoprecipitation studies. The dimerization of these receptors profoundly alters their pharmacological properties. The affinity of the heterodimers for highly selective agonists and antagonists is greatly reduced. Instead, the heterodimers show greatest affinity for partially selective agonists such as bremazocine, suggesting that receptor heterodimerization may explain at least part of the discrepancy between molecular and pharmacological properties of opioid receptors. Given the existence of four families of endogenous ligands and cloned receptors, it seems reasonable to ask if there is a one-to-one correspondence between them. Previous studies using brain homogenates demonstrated that an orderly pattern of association between a set of opioid gene products and a given receptor does not exist. Although proenkephalin products generally are associated with δ receptors and prodynorphin products with κ receptors, much "cross-talk" is present (Mansour et al., 1995). The cloning of the opioid receptors allowed this question to be addressed more systematically, since each receptor could be expressed separately and then compared side by side under identical conditions (Mansour et al., 1997). The κ receptor exhibits the most selectivity across endogenous ligands, with affinities ranging from 0.1 nM for dynorphin A to approximately 100 nM for leu-enkephalin. In contrast, μ and δ receptors show only a 10-fold difference between the most- and least-preferred ligand, with a majority of endogenous ligands exhibiting greater affinity for δ than for μ receptors. The limited selectivity of μ and δ receptors suggests that the μ and δ receptors recognize principally the Tyr-Gly-Gly-Phe core of the endogenous peptide, whereas the κ receptor requires this core and the arginine in position 6 of dynorphin A and other prodynorphin products (see Table 23–1).
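The selectivity comparison above can be made concrete by expressing the quoted affinities as fold-differences: a range from 0.1 nM to roughly 100 nM spans about a 1000-fold difference in affinity, versus the roughly 10-fold range quoted for the other two receptors. A trivial sketch, using only the Ki values cited in the text:

```python
# Expressing the affinity figures quoted in the text as fold-differences.
# Ki values: ~0.1 nM for dynorphin A and ~100 nM for leu-enkephalin at
# the kappa receptor (values as cited in the text).

def fold_selectivity(ki_preferred_nM, ki_least_nM):
    """Fold-difference in affinity between most- and least-preferred ligand."""
    return ki_least_nM / ki_preferred_nM

print(fold_selectivity(0.1, 100.0))  # kappa: ~1000-fold range
```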
Interestingly, proenkephalin products with arginine in position 6 (i.e., met-enkephalin-Arg-Phe and met-enkephalin-Arg-Gly-Leu) are equally good κ-receptor ligands, arguing against the idea of a unique association between a given receptor and a given opioid precursor family. In sum, high-affinity interactions are possible between each of the peptide precursor families and each of the three receptor types, the only exception being the lack of high-affinity interaction between POMC-derived peptides and κ receptors. Otherwise, at least one peptide product from each of the families exhibits high affinity (low nanomolar or subnanomolar) for each receptor. The relatively unimpressive affinity of the μ receptor toward all known endogenous

ligands suggests that its most avid and selective ligand has not been identified, a notion being put to test (see below).

Endomorphins

The search for a high-affinity/high-selectivity endogenous ligand for the μ receptor led to the discovery of a class of novel endogenous opioids termed endomorphins (Zadina et al., 1997). Endomorphin-1 and endomorphin-2 are tetrapeptides with the sequences Tyr-Pro-Trp-Phe and Tyr-Pro-Phe-Phe, respectively (Table 23–1). These novel peptides do not contain the canonical opioid core (Tyr-Gly-Gly-Phe) but nevertheless bind the μ receptor with very high affinity and selectivity. However, an endomorphin gene has yet to be cloned, and much remains to be learned about the endomorphins' anatomical distribution, mode of interaction with the opioid receptors, function in vivo, and the potential existence of other related peptides that are highly selective for each of the opioid receptors.

Molecular Basis for Opioid Receptor Selectivity and Affinity

Previous studies of other peptide receptors suggested that peptides and small molecules may bind to GPCRs differently. Mutagenesis studies of small-ligand receptors (e.g., adrenergic and dopamine receptors) showed that charged amino acid residues in the transmembrane domains were important in receptor binding and activation (Strader et al., 1988; Mansour et al., 1992). This observation places the bound ligands within the receptor core formed by the transmembrane helices. On the other hand, studies with peptidergic receptors have demonstrated a critical role for extracellular loops in ligand recognition (Xie et al., 1990). All three classical opioid receptors appear to combine both properties: charged residues located in transmembrane domains have been implicated in the high-affinity binding of most opioid ligands, whether alkaloid or peptide (Surratt et al., 1994; Mansour et al., 1997). However, critical interactions of opioid peptides with the extracellular domains also have been shown.
The opioid peptide Tyr-Gly-Gly-Phe core, sometimes termed the "message," appears to be necessary for interaction with the receptor-binding pocket; however, peptide selectivity resides in the carboxy-terminal extension beyond the tetrapeptide core, providing the "address" (Schwyzer, 1986). When the carboxy-terminal domain is long, it may interact with extracellular loops of the receptors, contributing to selectivity in a way that cannot be achieved by the much smaller alkaloids. Indeed, dynorphin A selectivity is dependent on the second extracellular loop of the κ receptor (Kong et al., 1994; Xue et al., 1994; Meng et al., 1995), whereas μ- and δ-selective ligands have more complex mechanisms of selectivity that depend on multiple extracellular loops. These findings have led to the proposal that high selectivity is achieved by both attraction to the most-favored receptor and repulsion by the less-favored receptor (Watson et al., 1995; Meng et al., 1995). For example, the N/OFQ receptor does not bind any of the classical endogenous opioid peptides. However, mutating as few as four amino acids endows the N/OFQ receptor with the ability to recognize prodynorphin-derived peptides while retaining recognition of N/OFQ (Meng et al., 1996), suggesting that unique mechanisms have evolved to ensure selectivity of the N/OFQ receptor for N/OFQ and against classical opioid peptides. Mechanisms involved in selectivity can be difficult to separate from mechanisms involved in affinity, because the extracellular domains may not only allow interactions with the peptide ligands but also may be important in stabilizing these interactions. Results of the research discussed above imply that the alkaloids are small enough to fit completely inside or near the mouth of the receptor core, while peptides bind to the extracellular loops and simultaneously extend to the receptor core to activate the common binding site. That one can truly

separate the binding of peptides and alkaloids is demonstrated most clearly by a genetically engineered κ receptor (Coward et al., 1998), which does not recognize endogenous peptide ligands yet retains full affinity and efficacy for small synthetic κ-receptor ligands, such as spiradoline. Given these differences in binding interactions with the receptor, it is possible that unique classes of ligands may activate the opioid receptor differently, leading to conformational changes of distinct quality or duration that may result in varying magnitudes and possibly different second-messenger events. This hypothesis currently is being tested and, if validated, may lead to novel strategies for differentially altering the interactions between the opioid receptors and signal transduction cascades. With the potential presence of receptor heterodimers and the likelihood that they have unique profiles and signaling properties (Jordan and Devi, 1999), there now are a number of new directions for discovery of drugs that may target receptors in particular states.

Opioid Receptor Signaling and Consequent Intracellular Events

Coupling of Opioid Receptors to Second Messengers

The μ, δ, and κ receptors in endogenous neuronal settings are coupled, via pertussis toxin-sensitive GTP-binding proteins, to inhibition of adenylyl cyclase activity (Herz, 1993), activation of receptor-operated K+ currents, and suppression of voltage-gated Ca2+ currents (Duggan and North, 1983). The hyperpolarization of the membrane potential by K+-current activation and the limiting of Ca2+ entry by suppression of Ca2+ currents are tenable but unproven mechanisms for explaining blockade by opioids of neurotransmitter release and pain transmission in varying neuronal pathways.
Studies with cloned receptors have shown that opioid receptors may couple to an array of other second-messenger systems, including activation of the MAP kinases and the phospholipase C (PLC)-mediated cascade leading to the formation of inositol trisphosphate and diacylglycerol (see Akil et al., 1997, for review). Prolonged exposure to opioids results in adaptations at multiple levels within these signaling cascades. The significance of these cellular-level adaptations lies in the causal relationship that may exist between them and adaptations seen at the organismic level, such as tolerance, sensitization, and withdrawal.

Receptor Desensitization, Internalization, and Sequestration Following Chronic Exposure to Opioids

Transient administration of opioids leads to a phenomenon termed acute tolerance, whereas sustained administration leads to the development of "classical" or chronic tolerance. Tolerance simply refers to a decrease in effectiveness of a drug with its repeated administration. Recent studies have focused on cellular mechanisms of acute tolerance. Several investigators have shown that short-term desensitization probably involves phosphorylation of the μ and δ receptors via protein kinase C (Mestek et al., 1995; Narita et al., 1995; Ueda et al., 1995). A number of other kinases also have been implicated, including protein kinase A and β-adrenergic receptor kinase (βARK) (Pei et al., 1995; Wang et al., 1994; also, see below). Like other GPCRs, both μ and δ receptors can undergo rapid agonist-mediated internalization via a classic endocytic pathway (Trapaidze et al., 1996; Gaudriault et al., 1997), whereas κ receptors do not internalize following prolonged agonist exposure (Chu et al., 1997). Interestingly, it seems that internalization occurs via partially distinct endocytic pathways for the μ and δ receptors, suggesting receptor-specific interactions with different mediators of intracellular trafficking (Gaudriault et al., 1997).
It also is intriguing that these processes may be induced differentially as a function of the structure of the ligand. For example, certain agonists, such as etorphine and enkephalins, cause rapid internalization of the μ receptor, while morphine, although it decreases adenylyl cyclase activity equally well, does not cause receptor internalization (Keith et al., 1996). In addition, a

truncated receptor with normal G protein coupling was shown to recycle constitutively from the membrane to cytosol (Segredo et al., 1997), further indicating that activation of signal transduction and internalization are controlled by distinct molecular mechanisms. These studies also support the hypothesis that different ligands induce different conformational changes in the receptor that result in divergent intracellular events, and they may provide an explanation for differences in the efficacy and abuse potential of various opioids. One of the most interesting studies to evaluate the relevance of these alterations in signaling to the adaptations seen in response to opioid exposure in vivo was the demonstration that acute morphine-induced analgesia was enhanced in mice lacking -arrestin 2 (Bohn et al., 1999). Opioid-receptor internalization is mediated, at least partially, by the actions of GPCR kinases (GRKs). GRKs selectively phosphorylate agonist-bound receptors, thereby promoting interactions with -arrestins, which interfere with G protein coupling and promote receptor internalization (Bohn et al., 1999). Enhanced analgesia in mice lacking -arrestin 2 is consistent with a role for the GRKs and arrestins in regulating responsivity to opioids in vivo. This result is even more intriguing given the inability of morphine to support arrestin translocation and receptor internalization in vitro (Whistler and von Zastrow, 1998). Traditionally, long-term tolerance has been thought to be associated with increases in adenylyl cyclase activitya counter-regulation to the decrease in cyclic AMP levels seen after acute opioid administration (Sharma et al., 1977). Chronic treatment with -receptor opioids causes superactivation of adenylyl cyclase (Avidor-Reiss et al., 1996). 
This effect is prevented by pretreatment with pertussis toxin, demonstrating involvement of Gi/o proteins, and also by cotransfection with scavengers of G protein βγ dimers, indicating a role for this complex in superactivation. Alterations in levels of cyclic AMP clearly bring about numerous secondary changes (see Nestler and Aghajanian, 1997).

An "Apparent Paradox"

A paradox in evaluating the function of endogenous opioid systems is that a host of endogenous ligands activates a small number of opioid receptors. This pattern is different from that of many other neurotransmitter systems, in which a single ligand interacts with a large number of receptors having different structures and second messengers. Is this richness and complexity at the presynaptic level lost as multiple opioid ligands derived from different genes converge on only three receptors, or is this richness preserved through means yet to be discovered? One possibility is that not all opioid receptors have been revealed by molecular cloning. Other options include splice variants, dimerization, and posttranslational modification, as discussed previously. Even assuming that other receptors and variants will be found, the binding of many endogenous ligands to the three cloned classical receptors suggests a great deal of convergence. However, this convergence may be only apparent, since multiple mechanisms for achieving distinctive responses in the context of the biology described above may exist. Some issues to consider are as follows:

1. The duration of action of endogenous ligands may be a critical variable that has been overlooked and that may have clinical relevance.
2. The pattern or profile of activation of multiple receptors by a ligand, rather than activation of a single receptor, may be a critical determinant of effect.
3. Opioid genes may give rise to multiple active peptides with unique profiles of activity. This patterning may be very complex and regulatable by various stimuli.
4. Patterns and/or efficacy of intracellular signaling produced by endogenous ligands at opioid receptors are under investigation (Emmerson et al., 1996). This issue may be particularly relevant for understanding physiological alterations following chronic administration of exogenous opioids.
5. Intracellular trafficking of the receptors may vary both as a function of the receptor and the ligand. This could have interesting implications for long-term adaptations during sustained treatment with opioids and following their withdrawal.

Understanding the complexity of endogenous opioid peptides and their patterns of interaction with multiple opioid receptors may help define the similarities and differences between the endogenous modulation of these systems and their activation by drugs. These insights could be important in devising treatment strategies that maximize beneficial properties of opioids (e.g., pain relief) while limiting their undesirable side effects such as tolerance, dependence, and addiction.

Effects of Clinically Used Opioids

Morphine and most other clinically used opioid agonists exert their effects through μ-opioid receptors. These drugs affect a wide range of physiological systems. They produce analgesia, affect mood and rewarding behavior (see also Chapter 24: Drug Addiction and Drug Abuse), and alter respiratory, cardiovascular, gastrointestinal, and neuroendocrine function. δ-Opioid receptor agonists also are potent analgesics in animals and, in isolated cases, have proved useful in human beings (Coombs et al., 1985). The main barrier to the clinical use of δ agonists is that most of the available agents are peptides and do not cross the blood-brain barrier, thus requiring intraspinal administration. However, much effort currently is being devoted to the development of clinically useful δ agonists. κ-Selective agonists produce analgesia that has been shown in animals to be mediated primarily at spinal sites. Respiratory depression and miosis may be less severe with κ agonists. Instead of euphoria, κ-receptor agonists produce dysphoric and psychotomimetic effects (Pfeiffer et al., 1986). In neural circuitry mediating both reward and analgesia, μ and κ agonists have been shown to have antagonistic effects (see below).
Mixed agonist-antagonist compounds were developed for clinical use with the hope that they would have less addictive potential and cause less respiratory depression than morphine and related drugs. In practice, however, it has turned out that for the same degree of analgesia, the same intensity of side effects will occur (American Pain Society, 1999). A "ceiling effect," limiting the amount of analgesia attainable, often is seen with these drugs. Some mixed agonist-antagonist drugs, such as pentazocine and nalorphine, can produce severe psychotomimetic effects that are not reversible with naloxone (suggesting that these undesirable side effects are not mediated through classical opioid receptors). Also, pentazocine and nalorphine can precipitate withdrawal in opioid-tolerant patients. For these reasons, the clinical use of these mixed agonist-antagonist drugs is limited.

Analgesia

In human beings, morphine-like drugs produce analgesia, drowsiness, changes in mood, and mental clouding. A significant feature of the analgesia is that it occurs without loss of consciousness. When therapeutic doses of morphine are given to patients with pain, they report that the pain is less intense, less discomforting, or entirely gone; drowsiness commonly occurs. In addition to relief of distress, some patients experience euphoria. When morphine in the same dose is given to a normal, pain-free individual, the experience may be unpleasant. Nausea is common, and vomiting also may occur. There may be feelings of drowsiness, difficulty in mentation, apathy, and lessened physical activity. As the dose is increased, the subjective, analgesic, and toxic effects, including respiratory depression, become more pronounced. Morphine does not have anticonvulsant activity and usually does not cause slurred speech, emotional lability, or significant motor incoordination. The relief of pain by morphine-like opioids is relatively selective, in that other sensory modalities
are not affected. Patients frequently report that the pain is still present, but that they feel more comfortable (see section on Therapeutic Uses of Opioid Analgesics). Continuous, dull pain is relieved more effectively than sharp, intermittent pain, but with sufficient amounts of opioid it is possible to relieve even the severe pain associated with renal or biliary colic. Any meaningful discussion of the action of analgesic agents must include some distinction between pain as a specific sensation, subserved by distinct neurophysiological structures, and pain as suffering (the original sensation plus the reactions evoked by the sensation). It is generally agreed that all types of painful experiences, whether produced experimentally or occurring clinically as a result of pathology, include both the original sensation and the reaction to that sensation. It also is important to distinguish between pain caused by stimulation of nociceptive receptors and transmitted over intact neural pathways (nociceptive pain) and pain that is caused by damage to neural structures, often involving neural supersensitivity (neuropathic pain). Although nociceptive pain usually is responsive to opioid analgesics, neuropathic pain typically responds poorly to opioid analgesics and may require higher doses of drug (McQuay, 1988). In clinical situations, pain cannot be terminated at will, and the meaning of the sensation and the distress it engenders are markedly affected by the individual's previous experiences and current expectations. In experimentally produced pain, measurements of the effects of morphine on pain threshold have not always been consistent; some workers find that opioids reliably elevate the threshold, while many others do not obtain consistent changes. In contrast, moderate doses of morphine-like analgesics are effective in relieving clinical pain and increasing the capacity to tolerate experimentally induced pain. 
Not only is the sensation of pain altered by opioid analgesics, but the affective response is changed as well. This latter effect is best assessed by asking patients with clinical pain about the degree of relief produced by the drug administered. When pain does not evoke its usual responses (anxiety, fear, panic, and suffering), a patient's ability to tolerate the pain may be markedly increased even when the capacity to perceive the sensation is relatively unaltered. It is clear, however, that alteration of the emotional reaction to painful stimuli is not the sole mechanism of analgesia. Intrathecal administration of opioids can produce profound segmental analgesia without causing significant alteration of motor or sensory function or subjective effects (Yaksh, 1988).

Mechanisms and Sites of Opioid-Induced Analgesia

While cellular and molecular studies of opioid receptors are invaluable in understanding their function, it is critical to place them in their anatomical and physiological context to fully understand the opioid system. Pain control by opioids needs to be considered in the context of brain circuits modulating analgesia and the functions of the various receptor types in these circuits. Excellent reviews of this topic are available (Fields et al., 1991; Harris, 1996). It has been well established that the analgesic effects of opioids arise from their ability to inhibit directly the ascending transmission of nociceptive information from the spinal cord dorsal horn and to activate pain control circuits that descend from the midbrain, via the rostral ventromedial medulla, to the spinal cord dorsal horn. Opioid peptides and their receptors are found throughout these descending pain control circuits (Mansour et al., 1995; Gutstein et al., 1998).
μ-Opioid receptor mRNA and/or ligand binding is seen throughout the periaqueductal gray (PAG), pontine reticular formation, median raphe, nucleus raphe magnus, and adjacent gigantocellular reticular nucleus in the rostral ventromedial medulla (RVM) and spinal cord. Evaluation of discrepancies between levels of ligand binding and mRNA expression provides important insights into the mechanisms of μ-opioid receptor-mediated analgesia. For instance, the presence of significant μ-opioid receptor ligand binding in the superficial dorsal horn but scarcity of mRNA expression
(Mansour et al., 1995) suggests that the majority of these spinal μ-receptor ligand binding sites are located presynaptically on the terminals of primary afferent nociceptors. This conclusion is consistent with the high levels of μ-opioid receptor mRNA observed in dorsal root ganglia (DRG). A similar mismatch between μ-receptor ligand binding and mRNA expression (a high level of binding and sparse mRNA) is seen in the dorsolateral PAG (Gutstein et al., 1998). κ-Opioid receptor mRNA and ligand binding have been demonstrated in the ventral and ventrolateral quadrants of the PAG, the pontine reticular formation, and the gigantocellular reticular nucleus, but only low levels are seen in the median raphe and nucleus raphe magnus. As with the μ-opioid receptor, there are significant numbers of κ-opioid receptor binding sites in the dorsal horn but no detectable mRNA expression, suggesting an important role for presynaptic actions of the κ-opioid receptor in spinal analgesia. δ-Opioid receptor mRNA and ligand binding are widespread throughout the PAG, pontine reticular formation, median raphe, nucleus raphe magnus, and adjacent gigantocellular reticular nucleus. Again, δ-receptor ligand binding but minimal mRNA have been found in the dorsal horn. Although all three receptor mRNAs are found in the DRG, they are localized on different types of primary afferent cells. μ-Opioid receptor mRNA is present in medium- and large-diameter DRG cells, δ-opioid receptor mRNA in large-diameter cells, and κ-opioid receptor mRNA in small- and medium-diameter cells (Mansour et al., 1995). This differential localization might be linked to functional differences in pain modulation. The distribution of opioid receptors in descending pain control circuits indicates substantial overlap between μ and κ receptors. The μ and κ receptors are most anatomically distinct from the δ-opioid receptor in the PAG, median raphe, and nucleus raphe magnus (Gutstein et al., 1998).
A similar differentiation of μ and κ receptors from δ is seen in the thalamus, suggesting that interactions between the μ receptor and the κ receptor may be important for modulating nociceptive transmission from higher nociceptive centers as well as in the spinal cord dorsal horn. The actions of μ-receptor agonists are invariably analgesic, whereas those of κ-receptor agonists can be either analgesic or antianalgesic. Consistent with the anatomical overlap between the μ and κ receptors, the antianalgesic actions of the κ-receptor agonists appear to be mediated by functional antagonism of the actions of μ-receptor agonists. The μ receptor produces analgesia within descending pain control circuits, at least in part, by the removal of GABAergic inhibition of RVM-projecting neurons in the PAG and of spinally projecting neurons in the RVM (Fields et al., 1991). The pain-modulating effects of the κ-receptor agonists in the brainstem appear to oppose those of μ-receptor agonists. Application of a κ-opioid agonist hyperpolarizes the same RVM neurons that are depolarized by a μ-opioid agonist, and microinjections of a κ-receptor agonist into the RVM antagonize the analgesia produced by microinjections of μ agonists into this region (Pan et al., 1997). This is the strongest evidence to date demonstrating that opioids can have antianalgesic as well as analgesic effects. This finding may explain behavioral evidence for a reduction in hyperalgesia that follows injections of naloxone under certain circumstances. As mentioned above, there is significant μ-opioid-receptor ligand binding but little detectable receptor mRNA expression in the spinal cord dorsal horn, and high levels of μ-opioid-receptor mRNA in the DRG. This distribution might suggest that the actions of μ-opioid-receptor agonists relevant to analgesia at the spinal level are predominantly presynaptic. At least one presynaptic mechanism with potential clinical significance is inhibition of spinal tachykinin signaling.
It is well known that opioids decrease the pain-evoked release of tachykinins from primary afferent nociceptors (Jessell and Iversen, 1977; Yaksh et al., 1980). Recently, the significance of this effect has been questioned. Trafton et al. (1999) have demonstrated that at least 80% of tachykinin signaling in response to noxious stimulation remains intact after the intrathecal administration of large doses of opioids. These results suggest that, while opioid administration may reduce tachykinin release from primary afferent nociceptors, this reduction has little functional impact on the actions of tachykinins on postsynaptic pain-transmitting neurons. This implies that either tachykinins are not central to pain
signaling and/or opioid-induced analgesia at the spinal level or that, contrary to the conclusions suggested by anatomical studies, presynaptic opioid actions may be of little analgesic significance. Just as important insights have been made into mechanisms of opioid-induced analgesia at the brainstem and spinal levels, progress also has been made in understanding forebrain mechanisms. It is well known that the actions of opioids in bulbospinal pathways are critical to their analgesic efficacy. The precise role of forebrain actions of opioids, and whether or not these actions are independent of those in bulbospinal pathways, is less well defined. It is clear that opioid actions in the forebrain contribute to analgesia, because decerebration prevents analgesia when rats are tested for pain sensitivity using the formalin test (Matthies and Franklin, 1992), and microinjections of opioids into several forebrain regions are analgesic in this test (Manning et al., 1994). However, because these manipulations frequently do not change the analgesic efficacy of opioids in measures of acute phasic nociception, such as the tailflick test, a distinction has been made between forebrain-dependent mechanisms for morphine-induced analgesia in the presence of tissue injury and bulbospinal mechanisms for this analgesia in the absence of tissue injury. In an important series of experiments, Manning and Mayer (1995a; 1995b) have shown that this distinction is not absolute. Analgesia induced by systemic administration of morphine in both the tailflick and formalin tests was disrupted either by lesioning or by reversibly inactivating the central nucleus of the amygdala, demonstrating that opioid actions in the forebrain contribute to analgesia in measures of tissue damage as well as of acute, phasic nociception.
The involvement of the amygdala in analgesia is intriguing, as the amygdala has been implicated in the environmental activation of pain control circuits, and it projects extensively to brainstem regions involved in descending pain control (Manning and Mayer, 1995a; 1995b). Simultaneous administration of morphine at both spinal and supraspinal sites results in synergy in the analgesic response, with a tenfold reduction in the total dose of morphine necessary to produce equivalent analgesia at either site alone. The mechanisms responsible for spinal/supraspinal synergy are readily distinguished from those involved with supraspinal analgesia (Pick et al., 1992a). In addition to the well-described spinal/supraspinal synergy, synergistic opioid-agonist interactions also have been observed within the brainstem between the periaqueductal gray, locus coeruleus, and nucleus raphe magnus (Rossi et al., 1993). Opioids also can produce analgesia when administered peripherally. Opioid receptors are present on peripheral nerves (Fields et al., 1980) and will respond to peripherally applied opioids and locally released endogenous opioid compounds when "up-regulated" during inflammatory pain states (Stein et al., 1991; Stein, 1993). During inflammation, immune cells capable of releasing endogenous opioids are present near sensory nerves, and a perineural defect allows opioids access to the nerves (Stein, 1993; Stein, 1995). It appears that this also may occur in neuropathic pain models (Kayser et al., 1995), perhaps because of the presence of immune cells near damaged nerves (Monaco et al., 1992) and perineural defects extant in these conditions.

The Role of N/OFQ and Its Receptor in Pain Modulation

N/OFQ mRNA and peptide are present throughout descending pain control circuits. For instance, N/OFQ-containing neurons are present in the PAG, the median raphe, throughout the RVM, and in the superficial dorsal horn (Neal et al., 1999b).
This distribution overlaps with that of opioid peptides, but the extent of colocalization remains unclear. N/OFQ-receptor ligand binding and mRNA are seen in the PAG, median raphe, and RVM (Neal et al., 1999a). Spinally, there is stronger N/OFQ-receptor mRNA expression in the ventral horn than in the dorsal horn, but higher levels of ligand binding in the dorsal horn. There also are high N/OFQ-receptor mRNA levels in the
DRG. Despite clear anatomical evidence for a role of the N/OFQ system in pain modulation, its function remains unclear. Targeted disruption of the N/OFQ receptor in mice had little effect on basal pain sensitivity in several measures, whereas targeted disruption of the N/OFQ precursor consistently elevated basal responses in the tailflick test, suggesting an important role for N/OFQ in regulating basal pain sensitivity (Nishi et al., 1997; Koster et al., 1999). Intrathecal injections of N/OFQ have been shown to be analgesic (Yamamoto et al., 1997; Xu et al., 1996); however, supraspinal administration has produced either hyperalgesia, antiopioid effects, or a biphasic hyperalgesic/analgesic response (Rossi et al., 1996b, 1997; Grisel et al., 1996). These conflicting findings may be explained in part by a study in which it was shown that N/OFQ inhibits both pain-facilitating and analgesia-facilitating neurons in the RVM (Pan et al., 2000). Activation of endogenous analgesic circuitry was blocked by administration of N/OFQ. If the animal was hyperalgesic, the enhanced pain sensitivity also was blocked by N/OFQ. Thus, the effects of N/OFQ on pain responses appear to depend on the preexisting state of pain in the animal.

Mood Alterations and Rewarding Properties

The mechanisms by which opioids produce euphoria, tranquility, and other alterations of mood (including rewarding properties) are not entirely clear. However, the neural systems that mediate opioid reinforcement are distinct from those involved in physical dependence and analgesia (Koob and Bloom, 1988). Behavioral and pharmacological evidence points to the role of dopaminergic pathways, particularly those involving the nucleus accumbens (NAcc), in drug-induced reward. There is ample evidence for interactions between opioids and dopamine in mediating opioid-induced reward.
A full appreciation of mechanisms of drug-induced reward requires a more complete understanding of the NAcc and related structures at the anatomical level, as well as a careful examination of the interface between the opioid system and dopamine receptors. The NAcc, portions of the olfactory tubercle, and the ventral and medial portions of the caudate-putamen constitute an area referred to as the ventral striatum (Heimer et al., 1982). The ventral striatum is implicated in motivation and affect (limbic functions), while the dorsal striatum is involved in sensorimotor and cognitive functions (Willner et al., 1991). Both the dorsal and ventral striatum are heterogeneous structures that can be subdivided into distinct compartments. In the middle and caudal third of the NAcc, the characteristic distribution of neuroactive substances results in two unique compartments termed the core and the shell (Zahm and Heimer, 1988; Heimer et al., 1991). It is important to note that other reward-relevant brain regions (e.g., the lateral hypothalamus and medial prefrontal cortex) implicated with a variety of abused drugs are connected reciprocally to the shell of the NAcc. Thus, the shell of the NAcc is the site that may be involved directly in the emotional and motivational aspects of drug-induced reward. Prodynorphin- and proenkephalin-derived opioid peptides are expressed primarily in output neurons of the striatum and NAcc. All three opioid receptor types are present in the NAcc (Mansour et al., 1988) and are thought to mediate, at least in part, the motivational effects of opiate drugs. Selective μ- and δ-receptor agonists are rewarding when defined by place preference (Shippenberg et al., 1992) and intracranial self-administration (Devine and Wise, 1994) paradigms. Conversely, selective κ-receptor agonists produce aversive effects (Cooper, 1991; Shippenberg et al., 1992). Naloxone and selective μ antagonists also produce aversive effects (Cooper, 1991).
Positive motivational effects of opioids are partially mediated by dopamine release at the level of the NAcc. Thus, κ-receptor activation in these circuits inhibits dopamine release (Mulder et al., 1991; Mulder and Schoffelmeer, 1993), while μ- and δ-receptor activation increases dopamine release (Chesselet et al., 1983; Devine et al., 1993). Distinctive cell clusters in the shell of the accumbens contain
proenkephalin, prodynorphin, μ receptors, and κ receptors, as well as dopamine receptors. These clusters possibly could be a region where the motivational properties of dopaminergic and opioid drugs are processed. The potential role of these structures, and of the neural circuits in which they are embedded, in the rewarding effects of opioids will be of great interest. The locus coeruleus (LC) contains both noradrenergic neurons and high concentrations of opioid receptors and is postulated to play a critical role in feelings of alarm, panic, fear, and anxiety. Neural activity in the LC is inhibited by both exogenous opioids and endogenous opioid-like peptides.

Other CNS Effects

While opioids are used clinically primarily for their pain-relieving properties, they produce a host of other effects. This is not surprising in view of the wide distribution of opioids and their receptors, both in the brain and in the periphery. A brief summary of some of these effects is presented below. High doses of opioids can produce muscular rigidity in human beings. Chest wall rigidity severe enough to compromise respiration is not uncommon during anesthesia with fentanyl, alfentanil, remifentanil, and sufentanil (see Monk et al., 1988). Opioids and endogenous peptides cause catalepsy, circling, and stereotypical behavior in rats and other animals.

Effects on the Hypothalamus

Opioids alter the equilibrium point of the hypothalamic heat-regulatory mechanisms, such that body temperature usually falls slightly. However, chronic high dosage may increase body temperature (see Martin, 1983).

Neuroendocrine Effects

Morphine acts in the hypothalamus to inhibit the release of gonadotropin-releasing hormone (GnRH) and corticotropin-releasing factor (CRF), thus decreasing circulating concentrations of luteinizing hormone (LH), follicle-stimulating hormone (FSH), ACTH, and β-endorphin; the last two peptides usually are released simultaneously from corticotropes in the pituitary.
As a result of the decreased concentrations of pituitary trophic hormones, the concentrations of testosterone and cortisol in plasma decline. Secretion of thyrotropin is relatively unaffected. The administration of μ agonists increases the concentration of prolactin in plasma, probably by reducing the dopaminergic inhibition of its secretion. Although some opioids enhance the secretion of growth hormone, the administration of morphine or β-endorphin has little effect on the concentration of the hormone in plasma. With chronic administration, tolerance develops to the effects of morphine on hypothalamic releasing factors. Observations in patients maintained on methadone reflect this phenomenon: in women, menstrual cycles that had been disrupted by intermittent use of heroin return to normal; in men, circulating concentrations of LH and testosterone are usually within the normal range. Although κ-receptor agonists inhibit the release of antidiuretic hormone and cause diuresis, the administration of μ-opioid agonists tends to produce antidiuretic effects in human beings. The effects of opioids on neuroendocrine function have been reviewed by Howlett and Rees (1986) and by Grossman (1988).

Miosis

Morphine and most μ and κ agonists cause constriction of the pupil by an excitatory action on the parasympathetic nerve innervating the pupil. Following toxic doses of μ agonists, the miosis is marked, and pinpoint pupils are pathognomonic; however, marked mydriasis occurs when asphyxia intervenes. Some tolerance to the miotic effect develops, but addicts with high circulating concentrations of opioids continue to have constricted pupils. Therapeutic doses of morphine increase accommodative power and lower intraocular tension in both normal and glaucomatous eyes.

Convulsions

In animals, high doses of morphine and related opioids produce convulsions. Several mechanisms appear to be involved, and different types of opioids produce seizures with different characteristics. Morphine-like drugs excite certain groups of neurons, especially hippocampal pyramidal cells; these excitatory effects probably result from inhibition of the release of GABA by interneurons (see McGinty and Friedman, 1988). Selective δ agonists produce similar effects. These actions may contribute to the seizures that are produced by some agents at doses only moderately higher than those required for analgesia, especially in children. However, with most opioids, convulsions occur only at doses far in excess of those required to produce profound analgesia, and seizures are not seen when potent agonists are used to produce anesthesia. Naloxone is more potent in antagonizing convulsions produced by some opioids (e.g., morphine, methadone, and propoxyphene) than those produced by others (e.g., meperidine). The production of convulsant metabolites of the latter agent may be partially responsible (see below). Anticonvulsant agents may not always be effective in suppressing opioid-induced seizures.

Respiration

Morphine-like opioids depress respiration, at least in part by virtue of a direct effect on the brainstem respiratory centers.
The respiratory depression is discernible even with doses too small to disturb consciousness and increases progressively as the dose is increased. In human beings, death from morphine poisoning is nearly always due to respiratory arrest. Therapeutic doses of morphine in human beings depress all phases of respiratory activity (rate, minute volume, and tidal exchange) and also may produce irregular and periodic breathing. The diminished respiratory volume is due primarily to a slower rate of breathing, and with toxic amounts the rate may fall to 3 or 4 breaths per minute. Although effects on respiration are readily demonstrated, clinically significant respiratory depression rarely occurs with standard morphine doses in the absence of underlying pulmonary dysfunction. However, the combination of opioids with other medications, such as general anesthetics, tranquilizers, alcohol, or sedative-hypnotics, may present a greater risk of respiratory depression. Maximal respiratory depression occurs within 5 to 10 minutes after intravenous administration of morphine or within 30 or 90 minutes following intramuscular or subcutaneous administration, respectively. Maximal respiratory depressant effects occur more rapidly with more lipid-soluble agents. Following therapeutic doses, respiratory minute volume may be reduced for as long as 4 to 5 hours. The primary mechanism of respiratory depression by opioids involves a reduction in the responsiveness of the brainstem respiratory centers to carbon dioxide. Opioids also depress the pontine and medullary centers involved in regulating respiratory rhythmicity and the responsiveness of medullary respiratory centers to electrical stimulation (see Martin, 1983). Hypoxic stimulation of the chemoreceptors still may be effective when opioids have decreased the responsiveness to CO2, and the inhalation of O2 may thus produce apnea. 
After large doses of morphine or other μ agonists, patients will breathe if instructed to do so, but without such instruction
they may remain relatively apneic. Because of the accumulation of CO2, respiratory rate and sometimes even minute volume can be unreliable indicators of the degree of respiratory depression that has been produced by morphine. Natural sleep also produces a decrease in the sensitivity of the medullary center to CO2, and the effects of morphine and sleep are additive. Numerous studies have compared morphine and morphine-like opioids with respect to their ratios of analgesic to respiratory-depressant activities. Most studies have found that, when equianalgesic doses are used, the degree of respiratory depression observed with morphine-like opioids is not significantly different from that seen with morphine. Severe respiratory depression is less likely after the administration of large doses of selective κ agonists. High concentrations of opioid receptors and of endogenous peptides are found in the medullary areas believed to be important in ventilatory control. Cough Morphine and related opioids also depress the cough reflex, at least in part by a direct effect on a cough center in the medulla. There is, however, no obligatory relationship between depression of respiration and depression of coughing, and effective antitussive agents are available that do not depress respiration (see below). Suppression of cough by such agents appears to involve receptors in the medulla that are less sensitive to naloxone than are those responsible for analgesia. Nauseant and Emetic Effects Nausea and vomiting produced by morphine-like drugs are unpleasant side effects caused by direct stimulation of the chemoreceptor trigger zone for emesis, in the area postrema of the medulla. Certain individuals never vomit after morphine, whereas others do so each time the drug is administered. 
Nausea and vomiting are relatively uncommon in recumbent patients given therapeutic doses of morphine, but nausea occurs in approximately 40% and vomiting in 15% of ambulatory patients given 15 mg of the drug subcutaneously. This suggests that a vestibular component also is operative. Indeed, the nauseant and emetic effects of morphine are markedly enhanced by vestibular stimulation, and morphine and related synthetic analgesics produce an increase in vestibular sensitivity. All clinically useful μ agonists produce some degree of nausea and vomiting. Careful, controlled clinical studies usually demonstrate that, in equianalgesic dosage, the incidence of such side effects is not significantly lower than that seen with morphine. Drugs that are useful in motion sickness are sometimes helpful in reducing opioid-induced nausea in ambulatory patients; phenothiazines are also useful (see Chapter 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania). Cardiovascular System In the supine patient, therapeutic doses of morphine-like opioids have no major effect on blood pressure or cardiac rate and rhythm. Such doses do produce peripheral vasodilation, reduced peripheral resistance, and an inhibition of baroreceptor reflexes. Therefore, when supine patients assume the head-up position, orthostatic hypotension and fainting may occur. The peripheral arteriolar and venous dilation produced by morphine involves several mechanisms. Morphine and some other opioids provoke release of histamine, which sometimes plays a large role in the hypotension. However, the vasodilation is only partially blocked by H1 antagonists; it is
effectively reversed by naloxone. Morphine also blunts the reflex vasoconstriction caused by increased PCO2. Effects on the myocardium are not significant in normal individuals. In patients with coronary artery disease but no acute medical problems, 8 to 15 mg of morphine administered intravenously produces a decrease in oxygen consumption, left ventricular end-diastolic pressure, and cardiac work; effects on cardiac index are usually slight (Sethna et al., 1982). In patients with acute myocardial infarction, the cardiovascular responses to morphine may be more variable than in normal subjects, and the magnitude of changes (e.g., the decrease in blood pressure) may be more pronounced (see Roth et al., 1988). Morphine may exert its well-known therapeutic effect in the treatment of angina pectoris and acute myocardial infarction by decreasing preload, inotropy, and chronotropy, thus favorably altering determinants of myocardial oxygen consumption and helping to relieve ischemia. It is not clear whether the analgesic properties of morphine in this situation are due to the reversal of acidosis that may stimulate local acid-sensing ion channels (Benson et al., 1999; McCleskey and Gold, 1999) or to a direct analgesic effect on nociceptive afferents from the heart. When administered prior to experimental ischemia, morphine has been shown to produce cardioprotective effects. Morphine can mimic the phenomenon of ischemic preconditioning, in which a short ischemic episode paradoxically protects the heart against further ischemia. This effect appears to be mediated through δ receptors signaling through a mitochondrial ATP-sensitive potassium channel in cardiac myocytes; the effect also is produced by other G protein-coupled receptors signaling through Gαi subunits (Fryer et al., 2000; Liang and Gross, 1999; Schultz et al., 1996). It also has been suggested recently that opioids can be antiarrhythmic and antifibrillatory during and after periods of ischemia (Fryer et al., 2000). 
Other data, however, suggest that opioids can be arrhythmogenic (McIntosh et al., 1992). Very large doses of morphine can be used to produce anesthesia; however, decreased peripheral resistance and blood pressure are troublesome. Fentanyl and sufentanil, which are potent and selective μ agonists, are less likely to cause hemodynamic instability during surgery, in part because they do not cause the release of histamine (Monk et al., 1988). Morphine-like opioids should be used with caution in patients who have a decreased blood volume, since these agents can aggravate hypovolemic shock. Morphine should be used with great care in patients with cor pulmonale, since deaths following ordinary therapeutic doses have been reported. The concurrent use of certain phenothiazines may increase the risk of morphine-induced hypotension. Cerebral circulation is not directly affected by therapeutic doses of morphine. However, opioid-induced respiratory depression and CO2 retention can result in cerebral vasodilation and an increase in cerebrospinal fluid pressure; the pressure increase does not occur when PCO2 is maintained at normal levels by artificial ventilation. Gastrointestinal Tract Stomach Morphine and other μ agonists usually decrease the secretion of hydrochloric acid, although stimulation is sometimes evident. Activation of opioid receptors on parietal cells enhances secretion, but indirect effects, including increased secretion of somatostatin from the pancreas and
reduced release of acetylcholine, appear to be dominant in most circumstances (see Kromer, 1988). Relatively low doses of morphine decrease gastric motility, thereby prolonging gastric emptying time; this can increase the likelihood of esophageal reflux (see Duthie and Nimmo, 1987). The tone of the antral portion of the stomach and of the first part of the duodenum is increased, which often makes therapeutic intubation of the duodenum more difficult. Passage of the gastric contents through the duodenum may be delayed by as much as 12 hours, and the absorption of orally administered drugs is retarded. Small Intestine Morphine diminishes biliary, pancreatic, and intestinal secretions (Dooley et al., 1988) and delays digestion of food in the small intestine. Resting tone is increased, and periodic spasms are observed. The amplitude of the nonpropulsive type of rhythmic, segmental contractions usually is enhanced, but propulsive contractions are markedly decreased. The upper part of the small intestine, particularly the duodenum, is affected more than the ileum. A period of relative atony may follow the hypertonicity. Water is absorbed more completely because of the delayed passage of bowel contents, and intestinal secretion is decreased; this increases the viscosity of the bowel contents. In the presence of intestinal hypersecretion that may be associated with diarrhea, morphine-like drugs inhibit the transfer of fluid and electrolytes into the lumen by naloxone-sensitive actions on the intestinal mucosa and within the CNS. Enterocytes may possess opioid receptors, but this hypothesis is controversial. However, it is clear that opioids exert important effects on the submucosal plexus that lead to a decrease in the basal secretion by enterocytes and inhibition of the stimulatory effects of acetylcholine, prostaglandin E2, and vasoactive intestinal peptide. 
The effects of opioids initiated either in the CNS or the submucosal plexus may be mediated in large part by the release of norepinephrine and stimulation of α2-adrenergic receptors on enterocytes (see Coupar, 1987). The actions of opioids on intestinal secretion have been reviewed by Manara and Bianchetti (1985) and Kromer (1988). Large Intestine Propulsive peristaltic waves in the colon are diminished or abolished after administration of morphine, and tone is increased to the point of spasm. The resulting delay in the passage of bowel contents causes considerable desiccation of the feces, which, in turn, retards their advance through the colon. The amplitude of the nonpropulsive type of rhythmic contractions of the colon usually is enhanced. The tone of the anal sphincter is greatly augmented, and reflex relaxation in response to rectal distension is reduced. These actions, combined with inattention to the normal sensory stimuli for the defecation reflex due to the central actions of the drug, contribute to morphine-induced constipation. Mechanism of Action on the Bowel The usual gastrointestinal effects of morphine primarily are mediated by μ- and δ-opioid receptors in the bowel. However, injection of opioids into the cerebral ventricles or in the vicinity of the spinal cord can inhibit gastrointestinal propulsive activity as long as the extrinsic innervation to the bowel is intact. The relatively poor penetration of morphine into the CNS may explain how preparations such as paregoric can produce constipation at less than analgesic doses and may account for troublesome gastrointestinal side effects during the use of oral morphine for the treatment of cancer pain (see Manara and Bianchetti, 1985). Although some tolerance develops to the effects of opioids on gastrointestinal motility, patients who take opioids chronically remain
constipated. Biliary Tract After the subcutaneous injection of 10 mg of morphine sulfate, the sphincter of Oddi constricts and the pressure in the common bile duct may rise more than tenfold within 15 minutes; this effect may persist for 2 hours or more. Fluid pressure also may increase in the gallbladder and produce symptoms that may vary from epigastric distress to typical biliary colic. Some patients with biliary colic may experience exacerbation rather than relief of pain when given these drugs. Spasm of the sphincter of Oddi is probably responsible for elevations of plasma amylase and lipase that are sometimes found after patients are given morphine. Atropine only partially prevents morphine-induced biliary spasm, but opioid antagonists prevent or relieve it. Nitroglycerin (0.6 to 1.2 mg) administered sublingually also decreases the elevated intrabiliary pressure (see Staritz, 1988). Other Smooth Muscle Ureter and Urinary Bladder Therapeutic doses of morphine may increase the tone and amplitude of contractions of the ureter, although the response is variable. When the antidiuretic effects of the drug are prominent and urine flow decreases, the ureter may become quiescent. Morphine inhibits the urinary voiding reflex, and both the tone of the external sphincter and the volume of the bladder are increased; catheterization is sometimes required following therapeutic doses of morphine. Stimulation of either μ or δ receptors in the brain or in the spinal cord exerts similar actions on bladder motility (see Dray and Nunan, 1987). Tolerance develops to these effects of opioids on the bladder. Uterus If the uterus has been made hyperactive by oxytocics, morphine tends to restore tone, frequency, and the amplitude of contractions to normal. Parenteral administration of opioids within 2 to 4 hours of delivery may lead to transient respiratory depression in the neonate due to transplacental passage of opioids. This may be treated readily with naloxone. 
Skin Therapeutic doses of morphine cause dilation of cutaneous blood vessels. The skin of the face, neck, and upper thorax frequently becomes flushed. These changes may be due in part to the release of histamine and may be responsible for the sweating and some of the pruritus that occasionally follow the systemic administration of morphine (see below). Histamine release probably accounts for the urticaria commonly seen at the site of injection; this is not mediated by opioid receptors and is not blocked by naloxone. It is seen with morphine and meperidine, but not with oxymorphone, methadone, fentanyl, or sufentanil (see Duthie and Nimmo, 1987). Pruritus is a common and potentially disabling complication of opioid use. It can be caused by intraspinal and systemic injections of opioids, but it appears to be more intense after intraspinal administration (Ballantyne et al., 1988). The effect appears to be mediated in large part by dorsal horn neurons and is reversible by naloxone (Thomas et al., 1992). An intriguing report suggested
that systemic morphine could partially inhibit pruritus caused by intraspinal administration of morphine, implying the existence of an opioid-mediated, itch-inhibition system, possibly supraspinal in origin (Thomas et al., 1993). Immune System The effects of opioids on the immune system are complex. Opioids have been shown to modulate immune function by direct effects on cells of the immune system and indirectly via centrally mediated neuronal mechanisms (Sharp and Yaksh, 1997). It appears that acute, central immunomodulatory effects of opioids may be mediated by activation of the sympathetic nervous system, whereas the chronic effects of opioids may involve modulation of hypothalamic-pituitary-adrenal (HPA) axis function (Mellon and Bayer, 1998). Direct effects on immune cells may involve unique and as yet incompletely characterized variants of the classical neuronal opioid receptors, with μ-receptor variants being more prominent (Sharp and Yaksh, 1997). Atypical receptors could account for the fact that it has been very difficult to demonstrate significant opioid binding on immune cells in spite of the observation of robust functional effects. In contrast, morphine-induced immune suppression is largely abolished in mice lacking the μ-receptor gene, suggesting that the μ receptor is a major target of morphine's actions on the immune system (Gaveriaux-Ruff et al., 1998). A potential mechanism for the immune-suppressive effects of morphine on neutrophils was proposed recently by Welters et al. (2000), who demonstrated that NF-κB activation induced by an inflammatory stimulus was inhibited by morphine in a nitric oxide-dependent manner. Another group of investigators has proposed that the induction and activation of MAP kinase also may play a role (Chuang et al., 1997). The overall effects of opioids on immune function appear to be suppressive; increased susceptibility to infection and tumor spread have been observed in experimental settings. 
Infusion of the opioid-receptor antagonist naloxone has been shown to improve survival after experimentally induced sepsis (Risdahl et al., 1998). Such effects have been inconsistent in clinical situations, possibly because of the use of confounding therapies and necessary opioid analgesics. In some situations, effects on immune function appear more prominent with acute administration than with chronic administration, which could have important implications for the care of the critically ill (Sharp and Yaksh, 1997). In contrast, opioids have been shown to reverse pain-induced immunosuppression and increased tumor metastatic potential in animal models (Page and Ben-Eliyahu, 1997). Therefore, opioids may either inhibit or augment immune function depending on the context in which they are used. These studies also indicate that withholding opioids in the presence of pain in immunocompromised patients could actually worsen immune function. An intriguing recent paper indicated that the partial μ-receptor agonist buprenorphine (see below) did not alter immune function when injected centrally into the mesencephalic periaqueductal gray matter, while morphine did (Gomez-Flores and Weber, 2000). Taken together, these studies indicate that opioid-induced immune suppression may be clinically relevant both to the treatment of severe pain and to the susceptibility of opioid addicts to infection (e.g., HIV, tuberculosis). Different opioid agonists also may have unique immunomodulatory properties. Better understanding of these properties eventually should help guide the rational use of opioids in patients with cancer or at risk for infection or immune compromise. Tolerance and Physical Dependence The development of tolerance and physical dependence with repeated use is a characteristic feature of all the opioid drugs. 
Tolerance to the effect of opioids or other drugs simply means that, over time, the drug loses its effectiveness and an increased dose is required to produce the same physiological response. Dependence refers to a complex and poorly understood set of changes in
the homeostasis of an organism that cause a disturbance of the homeostatic set point of the organism if the drug is stopped. This disturbance often is revealed when administration of an opioid is abruptly stopped, resulting in withdrawal. Addiction is a behavioral pattern characterized by compulsive use of a drug and overwhelming involvement with its procurement and use. Tolerance and dependence are physiological responses seen in all patients and are not predictors of addiction (see Chapter 24: Drug Addiction and Drug Abuse). These processes appear to be quite distinct. For example, cancer pain often requires prolonged treatment with high doses of opioids, leading to tolerance and dependence. Yet, abuse in this setting is very unusual (Foley, 1993). Neither the presence of tolerance and dependence nor the fear that they may develop should ever interfere with the appropriate use of opioids. Opioids can be discontinued in dependent patients once the need for analgesics is gone without subjecting them to withdrawal (see Chapter 24: Drug Addiction and Drug Abuse). Clinically, the dose can be decreased by 10% to 20% every other day and eventually stopped without signs and symptoms of withdrawal. In vivo studies in animal models demonstrate the importance of other neurotransmitters and their interactions with opioid pathways in the development of tolerance to morphine. Blockade of glutamate actions by NMDA (N-methyl-D-aspartate)-receptor antagonists blocks morphine tolerance (Trujillo and Akil, 1997). Since NMDA antagonists have no effect on the potency of morphine in naive animals, their effect cannot be attributed to potentiation of opioid actions. Interestingly, the clinically used antitussive dextromethorphan (see Dextromethorphan) has been shown to function as an NMDA antagonist. In animals, it can attenuate opioid tolerance development and reverse established tolerance (Elliott et al., 1994). 
Nitric oxide production, possibly induced by NMDA-receptor activation, also has been implicated in tolerance, as inhibition of nitric oxide synthase (NOS) also blocks morphine tolerance development (Kolesnikov et al., 1993). Administering NOS inhibitors to morphine-tolerant animals also may reverse tolerance in certain circumstances. Although the NMDA antagonists and nitric oxide synthase inhibitors are effective against tolerance to morphine and δ agonists such as DPDPE, they have little effect against tolerance to κ agonists. Dependence seems to be closely related to tolerance, since the same treatments that block tolerance to morphine also block dependence. Other related signaling systems also are being investigated as mediators of opioid tolerance and dependence. The selective actions of drugs on tolerance and dependence demonstrate that specific mechanisms can be targeted to minimize these two unwanted actions. Morphine and Related Opioid Agonists There are now many compounds with pharmacological properties similar to those of morphine, yet morphine remains the standard against which new analgesics are measured. However, responses of an individual patient may vary dramatically with different μ-opioid receptor agonists. For example, some patients unable to tolerate morphine may have no problems with an equianalgesic dose of methadone, whereas others can take morphine and not methadone. If problems are encountered with one drug, another should be tried. Mechanisms underlying variations in individual responses to morphine-like agonists are not yet well understood. Source and Composition of Opium Because the laboratory synthesis of morphine is difficult, the drug is still obtained from opium or extracted from poppy straw. Opium is obtained from the unripe seed capsules of the poppy plant, Papaver somniferum. The milky juice is dried and powdered to make powdered opium, which contains a number of alkaloids. 
Only a few (morphine, codeine, and papaverine) have clinical usefulness. These alkaloids can be divided into two distinct chemical classes, phenanthrenes and benzylisoquinolines. The principal phenanthrenes are morphine (10% of opium), codeine (0.5%),
and thebaine (0.2%). The principal benzylisoquinolines are papaverine (1.0%), which is a smooth muscle relaxant (see the seventh and earlier editions of this book), and noscapine (6.0%). Chemistry of Morphine and Related Opioids The structure of morphine is shown in Table 23-5. Many semisynthetic derivatives are made by relatively simple modifications of morphine or thebaine. Codeine is methylmorphine, the methyl substitution being on the phenolic hydroxyl group. Thebaine differs from morphine only in that both hydroxyl groups are methylated and that the ring has two double bonds (Δ6,7, Δ8,14). Thebaine has little analgesic action but is a precursor of several important 14-OH compounds, such as oxycodone and naloxone. Certain derivatives of thebaine are more than 1000 times as potent as morphine (e.g., etorphine). Diacetylmorphine, or heroin, is made from morphine by acetylation at the 3 and 6 positions. Apomorphine, which also can be prepared from morphine, is a potent emetic and dopaminergic agonist. Hydromorphone, oxymorphone, hydrocodone, and oxycodone also are made by modifying the morphine molecule. The structural relationships between morphine and some of its surrogates and antagonists are shown in Table 23-5. Structure-Activity Relationship of the Morphine-Like Opioids In addition to morphine, codeine, and the semisynthetic derivatives of the natural opium alkaloids, a number of other structurally distinct chemical classes of drugs have pharmacological actions similar to those of morphine. Clinically useful compounds include the morphinans, benzomorphans, methadones, phenylpiperidines, and propionanilides. Although the two-dimensional representations of these chemically diverse compounds appear to be quite different, molecular models show certain common characteristics; these are indicated by the heavy lines in the structure of morphine shown in Table 23-5. 
Among the important properties of the opioids that can be altered by structural modification are their affinities for various species of opioid receptors, their activities as agonists versus antagonists, their lipid solubilities, and their resistance to metabolic breakdown. For example, blockade of the phenolic hydroxyl at position 3, as in codeine and heroin, drastically reduces binding to μ receptors; these compounds are converted to the potent analgesics morphine and 6-acetylmorphine, respectively, in vivo. Absorption, Distribution, Fate, and Excretion Absorption In general, the opioids are readily absorbed from the gastrointestinal tract; absorption through the rectal mucosa is adequate, and a few agents (e.g., morphine, hydromorphone) are available in suppositories. The more lipophilic opioids also are readily absorbed through the nasal or buccal mucosa (Weinberg et al., 1988). Those with the greatest lipid solubility also can be absorbed transdermally (Portenoy et al., 1993). Opioids are absorbed readily after subcutaneous or intramuscular injection and can adequately penetrate the spinal cord following epidural or intrathecal administration (also see section on Alternative Routes of Administration). Small amounts of morphine introduced epidurally or intrathecally into the spinal canal can produce profound analgesia that may last 12 to 24 hours. However, due to the hydrophilic nature of morphine, there is rostral spread of the drug in spinal fluid, and side effects, especially respiratory depression, can emerge up to 24 hours later as the opioid reaches supraspinal respiratory control centers. With highly lipophilic agents such as hydromorphone or fentanyl, rapid absorption by spinal neural tissues produces very localized effects and segmental analgesia. 
The duration of action is shorter because of distribution of the drug in the systemic circulation, and the severity of respiratory depression may be more directly proportional to its concentration in plasma, due to a
lesser degree of rostral spread (Gustafsson and Wiesenfeld-Hallin, 1988). However, patients receiving epidural or intrathecal fentanyl still should be monitored for respiratory depression. With most opioids, including morphine, the effect of a given dose is less after oral than after parenteral administration, due to variable but significant first-pass metabolism in the liver. For example, the bioavailability of oral preparations of morphine is only about 25%. The shape of the time-effect curve also varies with the route of administration, so that the duration of action is often somewhat longer with the oral route. If adjustment is made for variability of first-pass metabolism and clearance, it is possible to achieve adequate relief of pain by the oral administration of morphine. Satisfactory analgesia in cancer patients has been associated with a very broad range of steady-state concentrations of morphine in plasma (16 to 364 ng/ml; Neumann et al., 1982). When morphine and most opioids are given intravenously, they act promptly. However, the more lipid-soluble compounds act more rapidly than morphine after subcutaneous administration because of differences in the rates of absorption and entry into the CNS. Compared with other more lipid-soluble opioids such as codeine, heroin, and methadone, morphine crosses the blood-brain barrier at a considerably lower rate. Distribution and Fate When therapeutic concentrations of morphine are present in plasma, about one-third of the drug is protein bound. Morphine itself does not persist in tissues, and 24 hours after the last dose tissue concentrations are low. The major pathway for the metabolism of morphine is conjugation with glucuronic acid. The two major metabolites formed are morphine-6-glucuronide and morphine-3-glucuronide. Small amounts of morphine-3,6-diglucuronide also may be formed. 
Although the 3- and 6-glucuronides are quite polar, both can cross the blood-brain barrier to exert significant clinical effects (Christup, 1997). Morphine-6-glucuronide has pharmacological actions indistinguishable from those of morphine. Morphine-6-glucuronide given systemically is approximately twice as potent as morphine in animal models (Paul et al., 1989) and in human beings (Osborne et al., 1988). With chronic administration, it accounts for a significant portion of morphine's analgesic actions (Osborne et al., 1988; Osborne et al., 1990; Portenoy et al., 1991; Portenoy et al., 1992). Indeed, with chronic oral dosing, the blood levels of morphine-6-glucuronide typically exceed those of morphine. Given its greater potency as well as its higher concentrations, morphine-6-glucuronide may be responsible for most of morphine's analgesic activity in patients receiving chronic oral morphine. Morphine-6-glucuronide is excreted by the kidney. In renal failure, the levels of morphine-6-glucuronide can accumulate, perhaps explaining the enhanced potency and prolonged duration of morphine in patients with compromised renal function. In young adults, the half-life of morphine is about 2 hours; the half-life of morphine-6-glucuronide is somewhat longer. Children achieve adult renal function values by 6 months of age. In elderly patients, lower morphine doses are recommended, based on its smaller volume of distribution (Owen et al., 1983) and the general decline in renal function in the elderly. The 3-glucuronide, also an important metabolite of morphine (Milne et al., 1996), has little affinity for opioid receptors but may contribute to the excitatory effects of morphine (Smith, 2000). Some investigators also have shown that morphine-3-glucuronide can antagonize morphine-induced analgesia (Smith et al., 1990), but this finding is not universal (Christup, 1997). Morphine also is metabolized by other pathways. 
N-demethylation to normorphine is a minor metabolic pathway in human beings but is more prominent in rodents (Yeh et al., 1977). N-dealkylation is important in the metabolism of some congeners of morphine.
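The kinetic figures cited above (a plasma half-life of roughly 2 hours in young adults and an oral bioavailability of about 25%) lend themselves to simple back-of-the-envelope calculations. The Python sketch below is purely illustrative (hypothetical function names; simple first-order elimination assumed) and is not a dosing guide:

```python
def fraction_remaining(t_hours, half_life_hours):
    """First-order elimination: fraction of drug remaining after t hours.

    Each half-life halves the amount present, so the fraction left is
    0.5 raised to the number of elapsed half-lives.
    """
    return 0.5 ** (t_hours / half_life_hours)


def equivalent_oral_dose_mg(parenteral_dose_mg, oral_bioavailability=0.25):
    """Oral dose giving roughly the same systemic exposure as a parenteral
    dose, assuming the ~25% oral bioavailability of morphine cited above
    (the remainder is lost to first-pass hepatic metabolism)."""
    return parenteral_dose_mg / oral_bioavailability


# With a ~2-hour half-life, about 94% of a morphine dose is gone in 8 hours.
print(round(fraction_remaining(8, 2), 4))  # 0.0625
print(equivalent_oral_dose_mg(10))         # 40.0
```

This toy model ignores the active metabolite morphine-6-glucuronide, whose accumulation in renal failure (as noted above) makes real plasma kinetics more complicated than a single exponential.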

Excretion Very little morphine is excreted unchanged. It is eliminated by glomerular filtration, primarily as morphine-3-glucuronide; 90% of the total excretion takes place during the first day. Enterohepatic circulation of morphine and its glucuronides occurs, which accounts for the presence of small amounts of morphine in the feces and in the urine for several days after the last dose. Codeine In contrast to morphine, codeine is approximately 60% as effective orally as parenterally, both as an analgesic and as a respiratory depressant. Codeine, like levorphanol, oxycodone, and methadone, has a high oral to parenteral potency ratio. The greater oral efficacy of these drugs is due to less first-pass metabolism in the liver. Once absorbed, codeine is metabolized by the liver, and its metabolites are excreted chiefly in the urine, largely in inactive forms. A small fraction (approximately 10%) of administered codeine is O-demethylated to form morphine, and both free and conjugated morphine can be found in the urine after therapeutic doses of codeine. Codeine has an exceptionally low affinity for opioid receptors, and the analgesic effect of codeine is due to its conversion to morphine. However, its antitussive actions may involve distinct receptors that bind codeine itself. The half-life of codeine in plasma is 2 to 4 hours. The conversion of codeine to morphine is effected by the cytochrome P450 enzyme CYP2D6. Well-characterized genetic polymorphisms in CYP2D6 lead to the inability to convert codeine to morphine, thus making codeine ineffective as an analgesic for about 10% of the Caucasian population (Eichelbaum and Evert, 1996). Other polymorphisms can lead to enhanced metabolism and thus increased sensitivity to codeine's effects (Eichelbaum and Evert, 1996). Interestingly, there appears to be variation in metabolic efficiency among different ethnic groups. 
For example, Chinese produce less morphine from codeine than do Caucasians and also are less sensitive to morphine's effects than are Caucasians (Caraco et al., 1999). The reduced sensitivity to morphine may be due to decreased production of morphine-6-glucuronide (Caraco et al., 1999). Thus, it is important to consider the possibility of metabolic enzyme polymorphism in any patient who does not receive adequate analgesia from codeine or an adequate response to other administered prodrugs. Tramadol Tramadol (ULTRAM) is a synthetic codeine analog that is a weak μ-opioid receptor agonist. Part of its analgesic effect is produced by inhibition of the uptake of norepinephrine and serotonin. Tramadol appears to be as effective as other weak opioids. In the treatment of mild to moderate pain, tramadol is as effective as morphine or meperidine. However, for the treatment of severe or chronic pain, tramadol is less effective. Tramadol is as effective as meperidine in the treatment of labor pain and may cause less neonatal respiratory depression. Tramadol is 68% bioavailable after a single oral dose and 100% available when administered intramuscularly. Its affinity for the μ-opioid receptor is only 1/6000 that of morphine. However, the primary O-demethylated metabolite of tramadol is 2 to 4 times as potent as the parent drug and may account for part of the analgesic effect. Tramadol is supplied as a racemic mixture, which is more effective than either enantiomer alone. The (+) enantiomer binds to the μ receptor and inhibits serotonin uptake. The (−) enantiomer inhibits norepinephrine uptake and stimulates α2-adrenergic receptors (Lewis and Han, 1997). The compound undergoes hepatic metabolism and renal excretion, with an elimination half-life of 6 hours for tramadol and 7.5 hours for its active metabolite. Analgesia begins within an hour of oral dosing, and the effect peaks within 2 to 3 hours.
The duration of analgesia is about 6 hours. The maximum recommended daily dose is 400 mg. Common side effects of tramadol include nausea, vomiting, dizziness, dry mouth, sedation, and headache. Respiratory depression appears to be less than with equianalgesic doses of morphine, and the degree of constipation is less than that seen after equivalent doses of codeine (Duthie, 1998). Tramadol can cause seizures and possibly exacerbate seizures in patients with predisposing factors. While tramadol-induced analgesia is not entirely reversible by naloxone, tramadol-induced respiratory depression can be reversed by naloxone. However, the use of naloxone increases the risk of seizure. Physical dependence on and abuse of tramadol have been reported. Although its abuse potential is unclear, tramadol probably should be avoided in patients with a history of addiction. Because of its inhibitory effect on serotonin uptake, tramadol should not be used in patients taking monoamine oxidase (MAO) inhibitors (Lewis and Han, 1997; see also the section on Interactions with Other Drugs, below).

Heroin

Heroin (diacetylmorphine) is rapidly hydrolyzed to 6-monoacetylmorphine (6-MAM), which, in turn, is hydrolyzed to morphine. Both heroin and 6-MAM are more lipid soluble than morphine and enter the brain more readily. Current evidence suggests that morphine and 6-MAM are responsible for the pharmacological actions of heroin. Heroin is mainly excreted in the urine, largely as free and conjugated morphine. The absorption, fate, and distribution of heroin and other morphine-like drugs have been reviewed by Misra (1978) and by Chan and Matzke (1987).

Untoward Effects and Precautions

Morphine and related opioids produce a wide spectrum of unwanted effects, including respiratory depression, nausea, vomiting, dizziness, mental clouding, dysphoria, pruritus, constipation, increased pressure in the biliary tract, urinary retention, and hypotension. The bases of these effects have been described above.
Rarely, a patient may develop delirium. Increased sensitivity to pain after the analgesia has worn off also may occur. A number of factors may alter a patient's sensitivity to opioid analgesics, including the integrity of the blood–brain barrier. For example, when morphine is administered to a newborn infant in weight-appropriate doses extrapolated from adults, unexpectedly profound analgesia and respiratory depression may be observed. This is due to the immaturity of the blood–brain barrier in neonates (Way et al., 1965). As mentioned previously, morphine is hydrophilic, so in the normal situation, proportionately less morphine crosses into the CNS than with more lipophilic opioids. In neonates and in other situations with a compromised blood–brain barrier, lipophilic opioids may give more predictable clinical results than morphine. In adults, the duration of the analgesia produced by morphine increases progressively with age; however, the degree of analgesia that is obtained with a given dose changes little. Changes in pharmacokinetic parameters only partially explain these observations. The patient with severe pain may tolerate larger doses of morphine. However, as the pain subsides, the patient may exhibit sedation and even respiratory depression as the stimulatory effects of pain are diminished. The reasons for this effect are unclear. All the opioid analgesics are metabolized by the liver, and the drugs should be used with caution in patients with hepatic disease, since increased bioavailability after oral administration or cumulative effects may occur (see Sawe et al., 1981). Renal disease also significantly alters the pharmacokinetics of morphine, codeine, drocode (dihydrocodeine), meperidine, and propoxyphene.
Although single doses of morphine are well tolerated, the active metabolite, morphine-6-glucuronide, may accumulate with continued dosing, and symptoms of opioid overdose may result (see Chan and Matzke, 1987). This metabolite also may accumulate during repeated administration of codeine to patients with impaired renal function. When repeated doses of meperidine are given to such patients, the accumulation of normeperidine may cause tremor and seizures (Kaiko et al., 1983). Similarly, the repeated administration of propoxyphene may lead to naloxone-insensitive cardiac toxicity caused by the accumulation of norpropoxyphene (see Chan and Matzke, 1987). Morphine and related opioids must be used cautiously in patients with compromised respiratory function, such as those with emphysema, kyphoscoliosis, or severe obesity. In patients with chronic cor pulmonale, death has occurred following therapeutic doses of morphine. Although many patients with such conditions seem to be functioning within normal limits, they are already utilizing compensatory mechanisms, such as increased respiratory rate. Many have chronically elevated levels of plasma CO2 and may be less sensitive to the stimulating actions of CO2. The further imposition of the depressant effects of opioids can be disastrous. The respiratory-depressant effects of opioids and the related capacity to elevate intracranial pressure must be considered in the presence of head injury or of an already elevated intracranial pressure. While head injury per se does not constitute an absolute contraindication to the use of opioids, the possibility of exaggerated depression of respiration and the potential need to control ventilation of the patient must be considered. Finally, since opioids may produce mental clouding and side effects such as miosis and vomiting, which are important signs in following the clinical course of patients with head injuries, the advisability of their use must be weighed carefully against these risks.
Morphine causes histamine release, which can cause bronchoconstriction and vasodilation. Morphine has the potential to precipitate or exacerbate asthmatic attacks. The use of morphine should be avoided in patients with a history of asthma. Other μ-receptor agonists that do not release histamine, such as the fentanyl derivatives, may be better choices for such patients. Patients with reduced blood volume are considerably more susceptible to the vasodilatory effects of morphine and related drugs, and these agents must be used cautiously in patients with hypotension from any cause. Allergic phenomena occur with opioid analgesics, but they are not common. They usually are manifested as urticaria and other types of skin rashes such as fixed eruptions; contact dermatitis in nurses and pharmaceutical workers also occurs. Wheals at the site of injection of morphine, codeine, and related drugs are probably secondary to the release of histamine. Anaphylactoid reactions have been reported after intravenous administration of codeine and morphine, but such reactions are rare. It has been suggested, but not proven, that such reactions are responsible for some of the sudden deaths, episodes of pulmonary edema, and other complications that occur among addicts who use heroin intravenously (see Chapter 24: Drug Addiction and Drug Abuse).

Interactions with Other Drugs

The depressant effects of some opioids may be exaggerated and prolonged by phenothiazines, monoamine oxidase inhibitors, and tricyclic antidepressants; the mechanisms of these supraadditive effects are not fully understood but may involve alterations in the rate of metabolic transformation of the opioid or alterations in neurotransmitters involved in the actions of opioids. Some, but not all, phenothiazines reduce the amount of opioid required to produce a given level of analgesia.
However, depending on the specific agent, the respiratory-depressant effects also seem to be enhanced, the degree of sedation is increased, and the hypotensive effects of phenothiazines become an additional complication. Some phenothiazine derivatives enhance the sedative effects, but at the same time seem to be antianalgesic and increase the amount of opioid required to produce satisfactory relief from pain. Small doses of amphetamine substantially increase the analgesic and euphoriant effects of morphine and may decrease its sedative side effects. A number of antihistamines exhibit modest analgesic actions; some (e.g., hydroxyzine) enhance the analgesic effects of low doses of opioids (Rumore and Schlichting, 1986). Antidepressants such as desipramine and amitriptyline are used in the treatment of chronic neuropathic pain but have limited intrinsic analgesic actions in acute pain. However, antidepressants may enhance morphine-induced analgesia (Levine et al., 1986; Pick et al., 1992b). The analgesic synergism between opioids and aspirin-like drugs is discussed below and in Chapter 27: Analgesic-Antipyretic and Antiinflammatory Agents and Drugs Employed in the Treatment of Gout.

Other μ-Receptor Agonists

Levorphanol and Congeners

Levorphanol (LEVO-DROMORAN) is the only commercially available opioid agonist of the morphinan series. The d-isomer (dextrorphan) is relatively devoid of analgesic action but may have inhibitory effects at NMDA receptors. The structure of levorphanol is shown in Table 23-5. The pharmacological effects of levorphanol closely parallel those of morphine. However, clinical reports suggest that it may produce less nausea and vomiting. Although levorphanol is less effective when given orally, its oral–parenteral potency ratio is comparable to that of codeine and oxycodone. The average adult dose (2 mg subcutaneously) produces analgesia for a period of time somewhat longer than that for morphine. Levorphanol is metabolized less rapidly and has a half-life of about 12 to 16 hours; repeated administration at short intervals may thus lead to accumulation of the drug in plasma (Foley, 1985).

Meperidine and Congeners

The structural formulas of meperidine, a phenylpiperidine, and some of its congeners are shown in Figure 23-4.
Meperidine is predominantly a μ-receptor agonist, and it exerts its chief pharmacological action on the CNS and the neural elements in the bowel. The use of meperidine has diminished in recent years due to concerns over metabolite toxicity. For this reason, meperidine is no longer recommended for the treatment of chronic pain and should not be used for longer than 48 hours or in doses greater than 600 mg/24 hours (Agency for Health Care Policy and Research, 1992).

Figure 23-4. Chemical Structures of Piperidine and Phenylpiperidine Analgesics.

Pharmacological Properties

Central Nervous System

Meperidine produces a pattern of effects similar but not identical to that described for morphine.

Analgesia

The analgesic effects of meperidine are detectable about 15 minutes after oral administration, reach a peak in about 1 to 2 hours, and subside gradually. The onset of analgesic effect is faster (within 10 minutes) after subcutaneous or intramuscular administration, and the effect reaches a peak in about 1 hour that corresponds closely to peak concentrations in plasma. In clinical use, the duration of effective analgesia is approximately 1.5 to 3 hours. In general, 75 to 100 mg of meperidine hydrochloride (pethidine, DEMEROL) given parenterally is approximately equivalent to 10 mg of morphine, and, in equianalgesic doses, meperidine produces as much sedation, respiratory depression, and euphoria as does morphine. In terms of total analgesic effect, meperidine is about one-third as effective when given by mouth as when administered parenterally. A few patients may experience dysphoria.

Other CNS Actions

Peak respiratory depression is observed within 1 hour after intramuscular administration, and there is a return toward normal starting at about 2 hours. Like other opioids, meperidine causes pupillary constriction, increases the sensitivity of the labyrinthine apparatus, and has effects on the secretion of pituitary hormones similar to those of morphine. Meperidine sometimes causes CNS excitation, characterized by tremors, muscle twitches, and seizures; these effects are due largely to accumulation of a metabolite, normeperidine (see below). As with morphine, respiratory depression is responsible for an accumulation of CO2, which, in turn, leads to cerebrovascular dilation, increase in cerebral blood flow, and elevation of cerebrospinal fluid pressure.

Cardiovascular System

The effects of meperidine on the cardiovascular system generally resemble those of morphine, including the ability to release histamine upon parenteral administration (Lee et al., 1976).
Intramuscular administration of meperidine does not significantly affect heart rate, but intravenous administration frequently produces a marked increase in heart rate.

Smooth Muscle

Meperidine has effects on certain smooth muscles qualitatively similar to those observed with other opioids. Meperidine does not cause as much constipation as does morphine even when given over prolonged periods of time; this may be related to its greater ability to enter the CNS, thereby producing analgesia at lower systemic concentrations. As with other opioids, clinical doses of meperidine slow gastric emptying sufficiently to delay absorption of other drugs significantly. The uterus of a nonpregnant woman usually is mildly stimulated by meperidine. Administered prior to an oxytocic, meperidine does not exert any antagonistic effect. Therapeutic doses given during active labor do not delay the birth process; in fact, the frequency, duration, and amplitude of uterine contraction sometimes may be increased (Zimmer et al., 1988). The drug does not interfere with normal postpartum contraction or involution of the uterus, and it does not increase the incidence of postpartum hemorrhage.

Absorption, Fate, and Excretion

Meperidine is absorbed by all routes of administration, but the rate of absorption may be erratic after intramuscular injection. The peak plasma concentration usually occurs at about 45 minutes, but the range is wide. After oral administration, only about 50% of the drug escapes first-pass metabolism to enter the circulation, and peak concentrations in plasma are usually observed in 1 to 2 hours (Herman et al., 1985). Meperidine is metabolized chiefly in the liver, with a half-life of about 3 hours. In patients with cirrhosis, the bioavailability of meperidine is increased to as much as 80%, and the half-lives of both meperidine and normeperidine are prolonged. Approximately 60% of meperidine in plasma is protein bound. In human beings, meperidine is hydrolyzed to meperidinic acid, which, in turn, is partially conjugated. Meperidine also is N-demethylated to normeperidine, which may then be hydrolyzed to normeperidinic acid and subsequently conjugated. The clinical significance of the formation of normeperidine is discussed further below. Only a small amount of meperidine is excreted unchanged.

Untoward Effects, Precautions, and Contraindications

The pattern and overall incidence of untoward effects that follow the use of meperidine are similar to those observed after equianalgesic doses of morphine, except that constipation and urinary retention may be less common. Patients who experience nausea and vomiting with morphine may not do so with meperidine; the converse also may be true. As with other opioids, tolerance develops to some of these effects. The contraindications are generally the same as for other opioids. In patients or addicts who are tolerant to the depressant effects of meperidine, large doses repeated at short intervals may produce an excitatory syndrome including hallucinations, tremors, muscle twitches, dilated pupils, hyperactive reflexes, and convulsions. These excitatory symptoms are due to the accumulation of normeperidine, which has a half-life of 15 to 20 hours compared with 3 hours for meperidine.
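The difference in half-lives explains why normeperidine, rather than meperidine itself, accumulates with repeated dosing. Under first-order kinetics, repeated dosing at a fixed interval τ produces a steady-state accumulation factor R = 1/(1 − 2^(−τ/t½)). A minimal sketch of this relationship, in which the 4-hour dosing interval and the 18-hour normeperidine half-life (midpoint of the stated 15 to 20 hours) are illustrative assumptions:

```python
def accumulation_factor(interval_h, half_life_h):
    """Steady-state accumulation factor for repeated dosing at a fixed
    interval, assuming first-order (exponential) elimination."""
    return 1.0 / (1.0 - 0.5 ** (interval_h / half_life_h))

tau = 4.0  # hypothetical dosing interval in hours (assumption)
print(f"Meperidine    (t1/2 ~3 h):  R = {accumulation_factor(tau, 3.0):.1f}")
print(f"Normeperidine (t1/2 ~18 h): R = {accumulation_factor(tau, 18.0):.1f}")
```

With these assumed numbers, meperidine accumulates less than twofold while normeperidine accumulates roughly sevenfold, which is consistent with the text's account of metabolite-driven excitatory toxicity during frequent repeated dosing.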
Opioid antagonists can block the convulsant effect of normeperidine in the mouse. Since normeperidine is eliminated by both the kidney and the liver, decreased renal or hepatic function increases the likelihood of such toxicity (Kaiko et al., 1983). Thus, meperidine is not the drug of choice for the treatment of severe or prolonged pain because of its shorter duration of action relative to morphine and the potential for CNS toxicity from normeperidine.

Interaction with Other Drugs

Severe reactions may follow the administration of meperidine to patients being treated with MAO inhibitors. Two basic types of interactions can be observed. The most prominent is an excitatory reaction with delirium, hyperthermia, headache, hyper- or hypotension, rigidity, convulsions, coma, and death. This reaction may be due to the ability of meperidine to block neuronal reuptake of serotonin and the resultant serotonergic overactivity (Stack et al., 1988). Therefore, meperidine and its congeners should not be used in patients taking MAO inhibitors. Dextromethorphan also inhibits neuronal serotonin uptake and should be avoided in these patients. As discussed above, tramadol inhibits uptake of norepinephrine and serotonin and should not be used concomitantly with MAO inhibitors. Similar interactions with other currently used opioids have not been observed clinically. Another type of interaction, a potentiation of opioid effect due to inhibition of hepatic microsomal enzymes, also can be observed in patients taking MAO inhibitors, necessitating a reduction in the doses of opioids. Chlorpromazine increases the respiratory-depressant effects of meperidine, as do tricyclic antidepressants; this is not true of diazepam. Concurrent administration of drugs such as promethazine or chlorpromazine also may greatly enhance meperidine-induced sedation without slowing clearance of the drug. Treatment with phenobarbital or phenytoin increases systemic clearance and decreases oral bioavailability of meperidine; this is associated with an elevation of the concentration of normeperidine in plasma (Edwards et al., 1982). As with morphine, concomitant administration of amphetamine has been reported to enhance the analgesic effects of meperidine and its congeners while counteracting sedation.

Therapeutic Uses

The major use of meperidine is for analgesia. Unlike morphine and its congeners, meperidine is not used for the treatment of cough or diarrhea. Single doses of meperidine also appear to be effective in the treatment of postanesthetic shivering. Meperidine crosses the placental barrier and even in reasonable analgesic doses causes a significant increase in the percentage of babies who show delayed respiration, decreased respiratory minute volume, or decreased oxygen saturation, or who require resuscitation. Both fetal and maternal respiratory depression induced by meperidine can be treated with naloxone. The fraction of drug that is bound to protein is lower in the fetus; concentrations of free drug thus may be considerably higher than in the mother. Nevertheless, meperidine produces less respiratory depression in the newborn than does an equianalgesic dose of morphine or methadone (Fishburne, 1982).

Congeners of Meperidine

Diphenoxylate

Diphenoxylate is a meperidine congener that has a definite constipating effect in human beings. Its only approved use is in the treatment of diarrhea. Although single doses in the therapeutic range (see below) produce little or no morphine-like subjective effects, at high doses (40 to 60 mg) the drug shows typical opioid activity, including euphoria, suppression of morphine abstinence, and a morphine-like physical dependence after chronic administration.
Diphenoxylate is unusual in that even its salts are virtually insoluble in aqueous solution, thus obviating the possibility of abuse by the parenteral route. Diphenoxylate hydrochloride is available only in combination with atropine sulfate (LOMOTIL, others). The recommended daily dosage of diphenoxylate for treatment of diarrhea in adults is 20 mg, in divided doses.

Difenoxin

Difenoxin (difenoxylic acid; MOTOFEN) is one of the metabolites of diphenoxylate; it has actions similar to those of the parent compound.

Loperamide

Loperamide (IMODIUM, others), like diphenoxylate, is a piperidine derivative (see Figure 23-3). It slows gastrointestinal motility by effects on the circular and longitudinal muscles of the intestine, presumably as a result of its interactions with opioid receptors in the intestine. Some part of its antidiarrheal effect may be due to a reduction of gastrointestinal secretion (see above; see also Manara and Bianchetti, 1985; Coupar, 1987; Kromer, 1988). In controlling chronic diarrhea, loperamide is as effective as diphenoxylate. In clinical studies, the most common side effect is abdominal cramps. Little tolerance develops to its constipating effect. In human volunteers taking large doses of loperamide, concentrations of drug in plasma peak about 4 hours after ingestion; this long latency may be due to inhibition of gastrointestinal motility and to enterohepatic circulation of the drug. The apparent elimination half-life is 7 to 14 hours. Loperamide is not well absorbed after oral administration and, in addition, apparently does not penetrate well into the brain because of exclusion by a P-glycoprotein transporter widely expressed in the blood–brain barrier (Sadeque et al., 2000). Mice with deletions of one of the genes encoding the P-glycoprotein transporter have much higher brain levels and significant central effects after administration of loperamide (Schinkel et al., 1996). Inhibition of P-glycoprotein by many clinically used drugs, such as quinidine, verapamil, and ketoconazole, possibly could lead to enhanced central effects of loperamide. In general, loperamide is unlikely to be abused parenterally because of its low solubility; large doses of loperamide given to human volunteers do not elicit pleasurable effects typical of opioids. The usual dosage is 4 to 8 mg per day; the daily dose should not exceed 16 mg.

Fentanyl and Congeners

Fentanyl is a synthetic opioid related to the phenylpiperidines (see Figure 23-3). It is a μ-receptor agonist and is about 100-times more potent than morphine as an analgesic. The actions of fentanyl and its congeners, sufentanil, alfentanil, and remifentanil, are similar to those of other μ-receptor agonists. Fentanyl is a popular drug in anesthetic practice because of its shorter time to peak analgesic effect, rapid termination of effect after small bolus doses, and relative cardiovascular stability (see Chapter 14: General Anesthetics).

Pharmacological Properties

Analgesia

The analgesic effects of fentanyl and sufentanil are similar to those of morphine and other opioids. Fentanyl is approximately 100-times more potent than morphine, and sufentanil is approximately 1000-times more potent than morphine. These drugs are most commonly administered intravenously, although both also are commonly administered epidurally and intrathecally for acute postoperative and chronic pain management. Fentanyl and sufentanil are far more lipid soluble than morphine; thus the risk of delayed respiratory depression due to rostral spread of intraspinally administered narcotic to respiratory centers is greatly reduced.
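The potency ratios above (fentanyl roughly 100-fold and sufentanil roughly 1000-fold that of morphine) imply that equieffective doses scale inversely with potency. A minimal sketch of that arithmetic, in which the 10-mg morphine dose is an illustrative assumption and which deliberately ignores route of administration, kinetics, and cross-tolerance:

```python
# Approximate analgesic potency relative to morphine, per the text.
POTENCY_VS_MORPHINE = {"morphine": 1, "fentanyl": 100, "sufentanil": 1000}

def equieffective_dose_mg(morphine_dose_mg, drug):
    """Dose of `drug` expected to match a given morphine dose, scaling
    inversely with relative potency (a rough guide only)."""
    return morphine_dose_mg / POTENCY_VS_MORPHINE[drug]

morphine_dose = 10.0  # mg, illustrative (assumption)
for drug in POTENCY_VS_MORPHINE:
    print(f"{drug}: {equieffective_dose_mg(morphine_dose, drug):g} mg")
```

The inverse scaling is why fentanyl and sufentanil are dosed in micrograms rather than milligrams; clinically, equianalgesic conversion also depends on route, duration, and patient factors that this sketch omits.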
The time to peak analgesic effect after intravenous administration of fentanyl and sufentanil is less than that for morphine and meperidine, with peak analgesia being reached after about 5 minutes, as opposed to approximately 15 minutes. Recovery from analgesic effects also occurs more quickly. However, with larger doses or prolonged infusions, the effects of these drugs become more long lasting, with durations of action becoming similar to those of longer-acting opioids (see below).

Other CNS Effects

As with other opioids, nausea, vomiting, and itching can be observed with fentanyl. Muscle rigidity, while possible after all narcotics, appears to be more common after administration of bolus doses of fentanyl or its congeners. This effect is thought to be centrally mediated and may be due in part to their increased potency relative to morphine. Rigidity can be mitigated by avoiding bolus dosing, by slower administration of boluses, and by pretreatment with a nonopioid anesthetic induction agent. Rigidity can be treated with depolarizing or nondepolarizing neuromuscular blocking agents while controlling the patient's ventilation; care must be taken to ensure that the patient is not left aware yet unable to move. Respiratory depression is similar to that observed with other μ-receptor agonists, but the onset is more rapid. As with analgesia, respiratory depression after small doses is of shorter duration than with morphine, but of similar duration after large doses or long infusions. As with morphine and meperidine, delayed respiratory depression also can be seen after the use of fentanyl, sufentanil, or alfentanil, possibly due to enterohepatic circulation. High doses of fentanyl can cause neuroexcitation and, rarely, seizure-like activity in human beings (Bailey and Stanley, 1994).
Fentanyl has minimal effects on intracranial pressure when ventilation is controlled and the arterial CO2 concentration is not allowed to rise.

Cardiovascular System

Fentanyl and its derivatives decrease the heart rate and can mildly decrease blood pressure. However, these drugs do not release histamine and in general provide a marked degree of cardiovascular stability. Direct depressant effects on the myocardium are minimal. For this reason, high doses of fentanyl or sufentanil commonly are used as the primary anesthetic for patients undergoing cardiovascular surgery or for patients with poor cardiac function.

Absorption, Fate, and Excretion

These agents are highly lipid soluble and rapidly cross the blood–brain barrier. This is reflected in the half-life for equilibration between the plasma and CSF of approximately 5 minutes for fentanyl and sufentanil. The levels in plasma and CSF rapidly decline due to redistribution of fentanyl from highly perfused tissue groups to other tissues, such as muscle and fat. As saturation of less-well-perfused tissue occurs, the duration of fentanyl's and sufentanil's effects approach the length of their elimination half-lives of between 3 and 4 hours (Sanford and Gutstein, 1995). Fentanyl and sufentanil undergo hepatic metabolism and renal excretion. Therefore, with the use of higher doses or prolonged infusions, fentanyl and sufentanil become longer acting.

Therapeutic Uses

Fentanyl citrate (SUBLIMAZE) and sufentanil citrate (SUFENTA) have gained widespread popularity as anesthetic adjuvants (see Chapter 14: General Anesthetics). They commonly are used either intravenously, epidurally, or intrathecally. A formulation of fentanyl and droperidol (INNOVAR) was commonly used for anesthesia. However, dysphoric side effects of droperidol have limited the popularity of this combination. Epidural use of fentanyl and sufentanil for postoperative or labor analgesia has gained increasing popularity.
A combination of epidural opioids with local anesthetics permits reduction in the dosage of both components, minimizing the side effects of both local anesthetics (i.e., motor blockade) and opioids (i.e., urinary retention, itching, and delayed respiratory depression in the case of morphine). Intravenous use of fentanyl and sufentanil for postoperative pain has been effective but limited by clinical concerns about muscle rigidity. However, the use of fentanyl and sufentanil in chronic pain treatment has become more widespread. Epidural and intrathecal infusions, both with and without local anesthetic, are used in the management of chronic malignant pain and selected cases of nonmalignant pain. Also, the development of novel, less invasive routes of administration for fentanyl has facilitated the use of these compounds in chronic pain management. Transdermal patches (DURAGESIC) that provide sustained release of fentanyl for 48 hours or more are available. However, factors promoting increased absorption (e.g., fever) can lead to relative overdosage and increased side effects (see also the section on Alternative Routes of Administration, below). Also, the FENTANYL ORALET, a formulation that permits rapid absorption of fentanyl through the buccal mucosa (much like a lollipop), was tried as an anesthetic premedicant but did not gain wide acceptance due to undesirable side effects in opioid-naïve patients (nausea, vomiting, pruritus, and respiratory depression). A similar fentanyl product, ACTIQ, is available in higher strengths and is used for relief of breakthrough cancer pain (Ashburn et al., 1989).

Alfentanil and Remifentanil

These compounds were developed in an effort to create analgesics with a more rapid onset and predictable termination of opioid effects. The potency of remifentanil is approximately equal to that of fentanyl and is between 20- and 30-times greater than that of alfentanil. The pharmacological properties of alfentanil and remifentanil are similar to those of fentanyl and sufentanil. They have similar incidences of nausea, vomiting, and dose-dependent muscle rigidity. Nausea, vomiting, itching, and headaches have been reported when remifentanil has been used for conscious analgesia for painful procedures. Intracranial pressure changes are minimal when ventilation is controlled. Seizures after remifentanil administration have not yet been reported.

Absorption, Fate, and Excretion

Both alfentanil and remifentanil have a more rapid onset of analgesic action than do fentanyl or sufentanil. Analgesic effects occur within 1 to 1.5 minutes. After intravenous administration, alfentanil is metabolized in the liver similarly to fentanyl and sufentanil, with an elimination half-life of 1 to 2 hours. The duration of action of alfentanil is dependent on both the dose and length of administration. Remifentanil is unique in that it is metabolized by plasma esterases (Burkle et al., 1996). Elimination is independent of hepatic metabolism or renal excretion, and the elimination half-life is 8 to 20 minutes. There is no prolongation of effect with repeated dosing or prolonged infusion. Age and weight can affect clearance of remifentanil, requiring that dosage be reduced in the elderly and based on lean body mass. However, neither of these conditions causes major changes in duration of effect. After 3- to 5-hour infusions of remifentanil, recovery of respiratory function can be seen within 3 to 5 minutes, while full recovery from all effects of remifentanil is observed within 15 minutes (Glass et al., 1999). The primary metabolite, remifentanil acid, is 2000- to 4000-times less potent than remifentanil and is renally excreted.
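Remifentanil's esterase-based elimination makes its offset largely independent of infusion duration, in contrast to alfentanil's hepatic clearance. The contrast can be sketched with simple first-order decay; the half-life midpoints used below (15 minutes for remifentanil, from the stated 8 to 20 minutes; 90 minutes for alfentanil, from the stated 1 to 2 hours) are illustrative assumptions, and the sketch ignores context-sensitive redistribution effects.

```python
def fraction_remaining(t_min, half_life_min):
    """Fraction of drug remaining t minutes after elimination begins,
    assuming simple first-order (exponential) decay."""
    return 0.5 ** (t_min / half_life_min)

t = 15.0  # minutes after stopping an infusion (illustrative)
print(f"Remifentanil (t1/2 ~15 min): {fraction_remaining(t, 15.0):.0%} remains")
print(f"Alfentanil   (t1/2 ~90 min): {fraction_remaining(t, 90.0):.0%} remains")
```

Under these assumptions roughly half the remifentanil is gone 15 minutes after the infusion stops while most of an alfentanil load remains, consistent with the text's observation of full recovery from remifentanil within about 15 minutes.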
Peak respiratory depression after bolus doses of remifentanil occurs after 5 minutes (Patel and Spencer, 1996).

Therapeutic Uses

Alfentanil hydrochloride (ALFENTA) and remifentanil hydrochloride (ULTIVA) are useful for short, painful procedures that require intense analgesia and blunting of stress responses. The titratability of remifentanil and its consistent, rapid offset make it ideally suited for short surgical procedures where rapid recovery is an issue. Remifentanil also has been used successfully for longer neurosurgical procedures, where rapid emergence from anesthesia is important. However, in cases where postprocedural analgesia is required, remifentanil alone is a poor choice. In this situation, either a longer-acting opioid or another analgesic modality should be combined with remifentanil for prolonged analgesia, or another opioid should be used. Alfentanil has been administered intraspinally for pain control. Remifentanil is presently not used intraspinally, as glycine in the drug vehicle can cause temporary motor paralysis. It is generally given by continuous intravenous infusion, as its short duration of action makes bolus administration impractical.

Methadone and Congeners

Methadone is a long-lasting μ-receptor agonist with pharmacological properties qualitatively similar to those of morphine.

Chemistry

Methadone has the following structural formula:

[structural formula not reproduced]

The analgesic activity of the racemate is almost entirely the result of its content of l-methadone, which is 8- to 50-times more potent than the d-isomer; d-methadone also lacks significant respiratory-depressant action and addiction liability, but it does possess antitussive activity.

Pharmacological Actions

The outstanding properties of methadone are its analgesic activity, its efficacy by the oral route, its extended duration of action in suppressing withdrawal symptoms in physically dependent individuals, and its tendency to show persistent effects with repeated administration. Miotic and respiratory-depressant effects can be detected for more than 24 hours after a single dose and, upon repeated administration, marked sedation is seen in some patients. Effects on cough, bowel motility, biliary tone, and the secretion of pituitary hormones are qualitatively similar to those of morphine.

Absorption, Fate, and Excretion

Methadone is well absorbed from the gastrointestinal tract and can be detected in plasma within 30 minutes after oral ingestion; it reaches peak concentrations at about 4 hours. After therapeutic doses, about 90% of methadone is bound to plasma proteins. Peak concentrations occur in the brain within 1 or 2 hours after subcutaneous or intramuscular administration, and this correlates well with the intensity and duration of analgesia. Methadone also can be absorbed from the buccal mucosa (Weinberg et al., 1988). Methadone undergoes extensive biotransformation in the liver. The major metabolites, the results of N-demethylation and cyclization to form pyrrolidines and pyrroline, are excreted in the urine and the bile along with small amounts of unchanged drug. The amount of methadone excreted in the urine is increased when the urine is acidified. The half-life of methadone is approximately 15 to 40 hours. Methadone appears to be firmly bound to protein in various tissues, including brain.
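Given the long and variable half-life, methadone approaches steady state only over several days of regular dosing; under first-order kinetics the fraction of the eventual steady-state level reached after time t is 1 − 2^(−t/t½). A brief sketch using the 15- to 40-hour range from the text, with once-daily dosing as an illustrative assumption:

```python
def fraction_of_steady_state(time_h, half_life_h):
    """Fraction of the eventual steady-state level reached after time_h of
    regular dosing, assuming first-order accumulation kinetics."""
    return 1.0 - 0.5 ** (time_h / half_life_h)

for half_life in (15.0, 40.0):  # hours, the range given in the text
    days_to_90pct = next(d for d in range(1, 15)
                         if fraction_of_steady_state(d * 24, half_life) >= 0.90)
    print(f"t1/2 {half_life:g} h: ~90% of steady state by day {days_to_90pct}")
```

The several-day approach to steady state is one reason doses that seem well tolerated early in therapy can produce delayed sedation as tissue levels continue to rise.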
After repeated administration there is gradual accumulation in tissues. When administration is discontinued, low concentrations are maintained in plasma by slow release from extravascular binding sites (see Kreek, 1979); this process probably accounts for the relatively mild but protracted withdrawal syndrome. Side Effects, Toxicity, Drug Interactions, and Precautions Side effects, toxicity, and conditions that alter sensitivity, as well as the treatment of acute intoxication, are similar to those described for morphine. During long-term administration, there may be excessive sweating, lymphocytosis, and increased concentrations of prolactin, albumin, and globulins in the plasma. Rifampin and phenytoin accelerate the metabolism of methadone and can
precipitate withdrawal symptoms (see Kreek, 1979). Tolerance and Physical Dependence Volunteer postaddicts who receive subcutaneous or oral methadone daily develop partial tolerance to the nauseant, anorectic, miotic, sedative, respiratory-depressant, and cardiovascular effects of methadone. Tolerance develops more slowly to methadone than to morphine in some patients, especially with respect to the depressant effects. However, this may be related in part to cumulative effects of the drug or its metabolites. Tolerance to the constipating effect of methadone does not develop as fully as does tolerance to other effects. The behavior of addicts who use methadone parenterally is strikingly similar to that of morphine addicts, but many former heroin users treated with oral methadone show virtually no overt behavioral effects. Development of physical dependence during the long-term administration of methadone can be demonstrated by drug withdrawal or by administration of an opioid antagonist. Subcutaneous administration of 10 to 20 mg of methadone to former opioid addicts produces definite euphoria equal in duration to that caused by morphine, and its overall abuse potential is comparable to that of morphine. Therapeutic Uses The primary uses of methadone hydrochloride (DOLOPHINE, others) are relief of chronic pain, treatment of opioid abstinence syndromes, and treatment of heroin users. It is not widely used as an antiperistaltic agent. It should not be used in labor. Analgesia The onset of analgesia occurs 10 to 20 minutes following parenteral administration and 30 to 60 minutes after oral medication. The average minimal effective analgesic concentration in blood is about 30 ng/ml (Gourlay et al., 1986). The typical oral dose is 2.5 to 15 mg, depending on the severity of the pain and the response of the patient. The initial parenteral dose is usually 2.5 to 10 mg.
Care must be taken when escalating the dosage, because of the prolonged half-life of the drug and its tendency to accumulate over a period of several days with repeated dosing. Despite its longer plasma half-life, the duration of the analgesic action of single doses is essentially the same as that of morphine. With repeated usage, cumulative effects are seen, so that either lower dosage or longer intervals between doses become possible. In contrast to morphine, methadone and many of its congeners retain a considerable degree of their effectiveness when given orally. In terms of total analgesic effects, methadone given orally is about 50% as effective as the same dose administered intramuscularly; however, the oral-parenteral potency ratio is considerably lower when peak analgesic effect is considered. In equianalgesic doses, the pattern and incidence of untoward effects caused by methadone and morphine are similar. Levomethadyl Acetate Levomethadyl acetate (l-α-acetylmethadol; ORLAAM) is a congener of methadone that is approved for use in maintenance programs for the treatment of heroin addicts. The drug is thought to act, in part, by its conversion to active metabolites, which explains its slow onset and protracted duration of action. The slow onset of effect can be problematic in the treatment of addicts (see Chapter 24: Drug Addiction and Drug Abuse). In physically dependent subjects taking levomethadyl acetate, withdrawal symptoms are not perceived for 72 to 96 hours after the last oral dose. Most subjects are comfortable taking a single dose as infrequently as every 72 hours (see Ling et al., 1978). The d
isomer of methadyl acetate is inactive. Propoxyphene Propoxyphene is structurally related to methadone. Its analgesic effect resides in the dextro isomer, d-propoxyphene (dextropropoxyphene). However, levopropoxyphene seems to have some antitussive activity. The structure of propoxyphene is shown below.
[structural formula]
Pharmacological Actions Although slightly less selective than morphine, propoxyphene binds primarily to μ-opioid receptors and produces analgesia and other CNS effects that are similar to those seen with morphine-like opioids. It is likely that at equianalgesic doses the incidence of side effects such as nausea, anorexia, constipation, abdominal pain, and drowsiness would be similar to those of codeine. As an analgesic, propoxyphene is about one-half to two-thirds as potent as codeine given orally. Ninety to 120 mg of propoxyphene hydrochloride administered orally would equal the analgesic effects of 60 mg of codeine, a dose that usually produces about as much analgesia as 600 mg of aspirin. Combinations of propoxyphene and aspirin, like combinations of codeine and aspirin, afford a higher level of analgesia than does either agent given alone (Beaver, 1988). Absorption, Fate, and Excretion Following oral administration, concentrations of propoxyphene in plasma reach their highest values at 1 to 2 hours. There is great variability between subjects in the rate of clearance and the plasma concentrations that are achieved. The average half-life of propoxyphene in plasma after a single dose is from 6 to 12 hours, which is longer than that of codeine. In human beings, the major route of metabolism is N-demethylation to yield norpropoxyphene. The half-life of norpropoxyphene is about 30 hours, and its accumulation with repeated doses may be responsible for some of the observed toxicity (see Chan and Matzke, 1987). Toxicity Given orally, propoxyphene is approximately one-third as potent as orally administered codeine in depressing respiration. Moderately toxic doses usually produce CNS and respiratory depression, but with still-larger doses the clinical picture may be complicated by convulsions in addition to respiratory depression. Delusions, hallucinations, confusion, cardiotoxicity, and pulmonary edema also have been noted.
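The oral potency comparisons above (90 to 120 mg of propoxyphene equianalgesic with 60 mg of codeine) reduce to a simple ratio; the sketch below, using the range midpoint as an arbitrary simplification, just checks that the stated "one-half to two-thirds" figure is consistent with those doses.

```python
def relative_oral_potency(equianalgesic_dose_mg: float,
                          reference_dose_mg: float) -> float:
    """Potency of a drug relative to a reference when the two doses are
    equianalgesic; a larger equianalgesic dose means lower potency."""
    return reference_dose_mg / equianalgesic_dose_mg

# Text: 90-120 mg propoxyphene ~ 60 mg codeine; midpoint 105 mg.
p = relative_oral_potency(105, 60)
print(f"propoxyphene relative to codeine: {p:.2f}")  # falls in 0.5-0.67
```

The result agrees with the prose: about one-half to two-thirds the potency of codeine.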
Respiratory-depressant effects are significantly enhanced when ethanol or sedative-hypnotics are ingested concurrently. Naloxone antagonizes the respiratory-depressant,
convulsant, and some of the cardiotoxic effects of propoxyphene. Tolerance and Dependence Very large doses [800 mg of propoxyphene hydrochloride (DARVON, others) or 1200 mg of the napsylate (DARVON-N) per day] reduce the intensity of the morphine withdrawal syndrome somewhat less effectively than do 1500-mg doses of codeine. Maximal tolerated doses are equivalent to daily doses of 20 to 25 mg of morphine, given subcutaneously. The use of higher doses of propoxyphene is prevented by untoward side effects and the occurrence of toxic psychoses. Very large doses produce some respiratory depression in morphine-tolerant addicts, suggesting that cross-tolerance between propoxyphene and morphine is incomplete. Abrupt discontinuation of chronically administered propoxyphene hydrochloride (up to 800 mg per day, given for almost 2 months) results in mild abstinence phenomena, and large oral doses (300 to 600 mg) produce subjective effects that are considered pleasurable by postaddicts. The drug is quite irritating when administered either intravenously or subcutaneously, so that abuse by these routes results in severe damage to veins and soft tissues. Therapeutic Uses Propoxyphene is recommended for the treatment of mild-to-moderate pain. Given acutely, the commonly prescribed combination of 32 mg of propoxyphene with aspirin may not produce more analgesia than aspirin alone, and doses of 65 mg of the hydrochloride or 100 mg of the napsylate are suggested. Propoxyphene is most often given in combination with aspirin or acetaminophen. The wide popularity of propoxyphene in clinical situations in which codeine was once used is largely a result of unrealistic overconcern about the addictive potential of codeine. Acute Opioid Toxicity Acute opioid toxicity may result from clinical overdosage, accidental overdosage in addicts, or attempts at suicide. 
Occasionally, a delayed type of toxicity may occur from the injection of an opioid into chilled skin areas or in patients with low blood pressure and shock. The drug is not fully absorbed, and, therefore, a subsequent dose may be given. When normal circulation is established, an excessive amount may be absorbed suddenly. It is difficult to define the exact amount of any opioid that is toxic or lethal to human beings. Recent experiences with methadone indicate that, in nontolerant individuals, serious toxicity may follow the oral ingestion of 40 to 60 mg. Older literature suggests that, in the case of morphine, a normal, pain-free adult is not likely to die after oral doses of less than 120 mg or to have serious toxicity with less than 30 mg parenterally. Symptoms and Diagnosis The patient who has taken an overdose of an opioid usually is stuporous or, if a large overdose has been taken, may be in a profound coma. The respiratory rate will be very low or the patient may be apneic, and cyanosis may be present. As respiratory exchange decreases, blood pressure, at first likely to be near normal, will fall progressively. If adequate oxygenation is restored early, the blood pressure will improve; if hypoxia persists untreated, there may be capillary damage, and measures to combat shock may be required. The pupils will be symmetrical and pinpoint in size; however, if hypoxia is severe, they may be dilated. Urine formation is depressed. Body temperature falls, and the skin becomes cold and clammy. The skeletal muscles are flaccid, the jaw is relaxed, and the tongue may fall back and block the airway. Frank convulsions occasionally may be noted in infants and children. When death occurs, it is nearly always due to respiratory failure. Even if respiration is restored, death still may occur as a result of complications that develop during the period of coma,
such as pneumonia or shock. Noncardiogenic pulmonary edema is seen commonly with opioid poisoning. It probably is not due to contaminants or to anaphylactoid reactions, and it has been observed following toxic doses of morphine, methadone, propoxyphene, and uncontaminated heroin. The triad of coma, pinpoint pupils, and depressed respiration strongly suggests opioid poisoning. The finding of needle marks suggestive of addiction further supports the diagnosis. Mixed poisonings, however, are not uncommon. Examination of the urine and gastric contents for drugs may aid in diagnosis, but the results usually become available too late to influence treatment. Treatment The first step is to establish a patent airway and ventilate the patient. Opioid antagonists (see Opioid Antagonists) can produce dramatic reversal of the severe respiratory depression, and the antagonist naloxone (see below) is the treatment of choice. However, care should be taken to avoid precipitating withdrawal in dependent patients, who may be extremely sensitive to antagonists. The safest approach is to dilute the standard naloxone dose (0.4 mg) and slowly administer it intravenously, monitoring arousal and respiratory function. With care, it usually is possible to reverse the respiratory depression without precipitating a major withdrawal syndrome. If no response is seen with the first dose, additional doses can be given. Patients should be observed for rebound increases in sympathetic nervous system activity, which may result in cardiac arrhythmias and pulmonary edema (see Duthie and Nimmo, 1987). For reversing opioid poisoning in children, the initial dose of naloxone is 0.01 mg/kg. If no effect is seen after a total dose of 10 mg, one can reasonably question the accuracy of the diagnosis. Pulmonary edema sometimes associated with opioid overdosage may be countered by positive-pressure respiration. 
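The naloxone dosing rules just described (0.01 mg/kg initial pediatric dose; a cumulative 10 mg without response casting doubt on the diagnosis) are simple arithmetic; the sketch below is illustrative only, with hypothetical function names, and is not dosing guidance.

```python
def pediatric_naloxone_dose_mg(weight_kg: float) -> float:
    """Initial naloxone dose for a child at 0.01 mg/kg (figure from the text)."""
    return 0.01 * weight_kg

def diagnosis_in_doubt(cumulative_naloxone_mg: float) -> bool:
    """Per the text, no response after a total of 10 mg of naloxone casts
    reasonable doubt on a diagnosis of opioid poisoning."""
    return cumulative_naloxone_mg >= 10.0

print(pediatric_naloxone_dose_mg(20))  # initial dose for a 20-kg child, in mg
print(diagnosis_in_doubt(10))
```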
Tonic-clonic seizures, occasionally seen as part of the toxic syndrome with meperidine and propoxyphene, are ameliorated by treatment with naloxone. The presence of general CNS depressants does not prevent the salutary effect of naloxone, and in cases of mixed intoxications, the situation will be improved largely due to antagonism of the respiratory-depressant effects of the opioid. However, some evidence indicates that naloxone and naltrexone also may antagonize some of the depressant actions of sedative-hypnotics. One need not attempt to restore the patient to full consciousness. The duration of action of the available antagonists is shorter than that of many opioids; hence, patients must be watched carefully, lest they slip back into coma. This is particularly important when the overdosage is due to methadone or l-α-acetylmethadol. The depressant effects of these drugs may persist for 24 to 72 hours, and fatalities have occurred as a result of premature discontinuation of naloxone. In cases of overdoses of these drugs, a continuous infusion of naloxone should be considered. Toxicity due to overdose of pentazocine and other opioids with mixed actions may require higher doses of naloxone. The pharmacological actions of opioid antagonists are discussed in more detail in Opioid Antagonists. Opioid Agonist/Antagonists and Partial Agonists The drugs described in this section differ from clinically used μ-opioid receptor agonists. Drugs such as nalbuphine and butorphanol are competitive μ-receptor antagonists but exert their analgesic actions by acting as agonists at κ receptors. Pentazocine qualitatively resembles these drugs, but it may be a weaker μ-receptor antagonist or partial agonist while retaining its κ-agonist activity. Buprenorphine, on the other hand, is a partial agonist at μ receptors. The stimulus for the development of mixed agonist/antagonist drugs was a need for analgesics with less respiratory depression and addictive potential.
Currently, the clinical use of these compounds is limited by
undesirable side effects and by limited analgesic effects. Pentazocine Pentazocine was synthesized as part of a deliberate effort to develop an effective analgesic with little or no abuse potential. It has both agonistic actions and weak opioid antagonistic activity. The pharmacology of pentazocine has been reviewed by Brogden and coworkers (1973). Chemistry Pentazocine is a benzomorphan derivative with the following structural formula:
[structural formula]
The compound has a large substituent on the nitrogen atom that is analogous to position 17 of morphine. This structural feature is common to a number of opioids with antagonist or agonist/antagonist activity. The analgesic and respiratory-depressant activity of the racemate is due mainly to the l isomer. Pharmacological Actions The pattern of CNS effects produced by pentazocine is generally similar to that of the morphine-like opioids, including analgesia, sedation, and respiratory depression. The analgesic effects of pentazocine are due to agonistic actions at κ-opioid receptors. Higher doses of pentazocine (60 to 90 mg) elicit dysphoric and psychotomimetic effects. The mechanisms responsible for these side effects are not known but might involve activation of supraspinal κ receptors, since it has been suggested that these untoward effects may be reversible by naloxone. The cardiovascular responses to pentazocine differ from those seen with typical μ-receptor agonists, in that high doses cause an increase in blood pressure and heart rate. In patients with coronary artery disease, pentazocine administered intravenously elevates mean aortic pressure, left ventricular end-diastolic pressure, and mean pulmonary artery pressure and causes an increase in cardiac work (Alderman et al., 1972; Lee et al., 1976). A rise in the concentrations of catecholamines in plasma may account for its effects on blood pressure. Pentazocine acts as a weak antagonist or partial agonist at μ-opioid receptors. Low doses (20 mg given parenterally) depress respiration as much as does 10 mg of morphine, but increasing the pentazocine dose does not produce a proportionate increase in respiratory depression. Pentazocine does not antagonize the respiratory depression produced by morphine. However, when given to patients dependent on morphine or other μ-receptor agonists, pentazocine may precipitate withdrawal.
In patients tolerant to morphine-like opioids, pentazocine reduces the analgesia produced by their administration, even when clear-cut withdrawal symptoms are not precipitated. Ceiling effects for both analgesia and respiratory depression are observed above 50 to 100 mg of
pentazocine (Bailey and Stanley, 1994). Absorption, Fate, and Excretion Pentazocine is well absorbed from the gastrointestinal tract and from subcutaneous and intramuscular sites. Peak analgesia occurs 15 minutes to 1 hour after intramuscular administration and 1 to 3 hours after oral administration. The half-life in plasma is 4 to 5 hours. First-pass metabolism in the liver is extensive, and somewhat less than 20% of pentazocine enters the systemic circulation. Drug action is terminated by hepatic metabolism and renal excretion. Side Effects, Toxicity, and Precautions The most commonly reported untoward effects are sedation, sweating, and dizziness or lightheadedness; nausea also occurs, but vomiting is less common than with morphine. Psychotomimetic effects, such as uncontrollable or weird thoughts, anxiety, nightmares, and hallucinations, occur with parenteral doses above 60 mg. Epidemiological data suggest that overdose with pentazocine alone rarely causes death. High doses produce marked respiratory depression associated with increased blood pressure and tachycardia. The respiratory depression is antagonized by naloxone. Pentazocine is irritating when administered subcutaneously or intramuscularly. Repeated injections over long periods may cause extensive fibrosis of subcutaneous and muscular tissue. Patients who have been receiving opioids on a regular basis may experience abstinence signs and symptoms when given pentazocine. After an opioid-free interval of 1 to 2 days, it is usually possible to administer pentazocine without producing such withdrawal effects. Tolerance and Physical Dependence With frequent and repeated use, tolerance develops to the analgesic and subjective effects of pentazocine. However, pentazocine does not prevent or ameliorate the morphine withdrawal syndrome. Instead, when high doses of pentazocine are given to subjects dependent on morphine, it precipitates withdrawal symptoms because of its antagonistic actions at the μ receptor.
After long-term administration (60 mg every 4 hours), postaddicts develop physical dependence that can be demonstrated by abrupt withdrawal or by the administration of naloxone. The withdrawal syndrome after chronic doses of more than 500 mg per day, although milder in intensity than withdrawal from morphine, includes abdominal cramps, anxiety, chills, elevated temperature, vomiting, lacrimation, and sweating. Pentazocine withdrawal symptoms can be managed by gradual reduction of pentazocine itself or by substitution of μ-receptor agonists, such as morphine or methadone. A syndrome of withdrawal from pentazocine also has been observed in neonates. Therapeutic Uses Pentazocine is used as an analgesic. Although the risk of drug dependence exists, it may be lower than that associated with the use of morphine-like drugs in similar circumstances. Because abuse patterns appear to be less likely to develop with oral administration, this route should be used whenever possible. Pentazocine lactate (TALWIN) is available as a solution for injection. In an effort to reduce the use of tablets as a source of injectable pentazocine, tablets for oral use now contain pentazocine hydrochloride (equivalent to 50 mg of the base) and naloxone hydrochloride (equivalent to 0.5 mg of the base; TALWIN NX). After oral ingestion, naloxone is destroyed rapidly by the liver; however,
if the material is dissolved and injected, the naloxone produces aversive effects in subjects dependent on opioids. Tablets containing mixtures of pentazocine with aspirin (TALWIN COMPOUND) or acetaminophen (TALACEN) also are available. In terms of analgesic effect, 30 to 60 mg of pentazocine given parenterally is approximately equivalent to 10 mg of morphine. An oral dose of about 50 mg of pentazocine results in analgesia equivalent to that produced by 60 mg of codeine orally. Nalbuphine Nalbuphine is related structurally to both naloxone and oxymorphone (see Table 23-5). It is an agonist/antagonist opioid with a spectrum of effects that qualitatively resembles that of pentazocine; however, nalbuphine is a more potent antagonist at μ receptors and is less likely to produce dysphoric side effects than is pentazocine. Pharmacological Actions and Side Effects An intramuscular dose of 10 mg of nalbuphine is equianalgesic to 10 mg of morphine, with similar onset and duration of both analgesic and subjective effects. Nalbuphine depresses respiration as much as do equianalgesic doses of morphine. However, nalbuphine exhibits a ceiling effect, such that increases in dosage beyond 30 mg produce no further respiratory depression. A ceiling effect for analgesia also is reached at this point. In contrast to pentazocine and butorphanol, 10 mg of nalbuphine given to patients with stable coronary artery disease does not produce an increase in cardiac index, pulmonary arterial pressure, or cardiac work, and systemic blood pressure is not significantly altered; these indices also are relatively stable when nalbuphine is given to patients with acute myocardial infarction (see Roth et al., 1988). Its gastrointestinal effects are probably similar to those of pentazocine. Nalbuphine produces few side effects at doses of 10 mg or less; sedation, sweating, and headache are the most common.
At much higher doses (70 mg), psychotomimetic side effects (dysphoria, racing thoughts, and distortions of body image) can occur. Nalbuphine is metabolized in the liver and has a half-life in plasma of 2 to 3 hours. Given orally, nalbuphine is 20% to 25% as potent as when given intramuscularly. Tolerance and Physical Dependence In subjects dependent on low doses of morphine (60 mg per day), nalbuphine precipitates an abstinence syndrome. Prolonged administration of nalbuphine can produce physical dependence. The withdrawal syndrome is similar in intensity to that seen with pentazocine. The potential for abuse of parenteral nalbuphine in subjects not dependent on μ-receptor agonists is probably similar to that of parenteral pentazocine. Therapeutic Uses Nalbuphine hydrochloride (NUBAIN) is used to produce analgesia. Because it is an agonist/antagonist, administration to patients who have been receiving morphine-like opioids may create difficulties unless a brief drug-free interval is interposed. The usual adult dose is 10 mg parenterally every 3 to 6 hours; this may be increased to 20 mg in nontolerant individuals. Butorphanol Butorphanol is a morphinan congener with a profile of actions similar to those of pentazocine. The structural formula of butorphanol is shown in Table 23-5.

Pharmacological Actions and Side Effects In postoperative patients, a parenteral dose of 2 to 3 mg of butorphanol produces analgesia and respiratory depression approximately equal to that produced by 10 mg of morphine or 80 to 100 mg of meperidine; the onset, peak, and duration of action are similar to those that follow the administration of morphine. The plasma half-life of butorphanol is about 3 hours. Like pentazocine, analgesic doses of butorphanol produce an increase in pulmonary arterial pressure and in the work of the heart; systemic arterial pressure is slightly decreased (Popio et al., 1978). The major side effects of butorphanol are drowsiness, weakness, sweating, feelings of floating, and nausea. While the incidence of psychotomimetic side effects is lower than that with equianalgesic doses of pentazocine, they are qualitatively similar. Physical dependence on butorphanol can occur. Therapeutic Uses Butorphanol tartrate (STADOL) is better suited for the relief of acute rather than chronic pain. Because of its side effects on the heart, it is less useful than morphine or meperidine in patients with congestive heart failure or myocardial infarction. The usual dose is between 1 and 4 mg of the tartrate given intramuscularly, or 0.5 to 2 mg given intravenously every 3 to 4 hours. A nasal formulation (STADOL NS) is available and has proven to be effective. This formulation is particularly useful for patients with severe headaches who may be unresponsive to other forms of treatment. Buprenorphine Buprenorphine is a semisynthetic, highly lipophilic opioid derived from thebaine (see Table 23-5). It is 25 to 50 times more potent than morphine. Pharmacological Actions and Side Effects Buprenorphine produces analgesia and other CNS effects that are qualitatively similar to those of morphine. About 0.4 mg of buprenorphine is equianalgesic with 10 mg of morphine given intramuscularly (Wallenstein et al., 1986).
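The parenteral equianalgesic figures scattered through this section can be collected into one illustrative lookup. All values come from the text (10 mg of intramuscular morphine as the reference); where the text gives a range, the midpoint shown is my own simplification. This is arithmetic illustration only, not dosing guidance, and the linear scaling is a deliberately crude assumption.

```python
# Approximate parenteral doses equianalgesic to 10 mg of IM morphine,
# taken from the values quoted in this section (ranges -> midpoints).
EQUIANALGESIC_IM_MG = {
    "morphine": 10.0,
    "pentazocine": 45.0,    # text: 30 to 60 mg
    "nalbuphine": 10.0,
    "butorphanol": 2.5,     # text: 2 to 3 mg
    "buprenorphine": 0.4,
    "meperidine": 90.0,     # text: 80 to 100 mg
}

def equianalgesic_dose(drug: str, morphine_mg: float) -> float:
    """Scale linearly from the 10-mg morphine reference (a crude
    simplification; real conversions are not strictly linear)."""
    return EQUIANALGESIC_IM_MG[drug] * (morphine_mg / 10.0)

print(equianalgesic_dose("buprenorphine", 10))  # -> 0.4
```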
Although variable, the duration of analgesia is usually longer than that of morphine. Some of the subjective and respiratory-depressant effects are unequivocally slower in onset and longer lasting than those of morphine. For example, peak miosis occurs about 6 hours after intramuscular injection, while maximal respiratory depression is observed at about 3 hours. Buprenorphine appears to be a partial μ-receptor agonist. Depending on the dose, buprenorphine may cause symptoms of abstinence in patients who have been receiving μ-receptor agonists (morphine-like drugs) for several weeks. It antagonizes the respiratory depression produced by anesthetic doses of fentanyl about as well as does naloxone, without completely preventing opioid pain relief (Boysen et al., 1988). Although respiratory depression has not been a major problem in clinical trials, it is not clear whether or not there is a ceiling for this effect (as seen with nalbuphine and pentazocine). The respiratory depression and other effects of buprenorphine can be prevented by prior administration of naloxone, but they are not readily reversed by high doses of naloxone once the effects have been produced. This suggests that buprenorphine dissociates very slowly from opioid receptors. The half-life for dissociation from the receptor is 166 minutes for buprenorphine, as opposed to 7 minutes for fentanyl (Boas and Villiger, 1985). Therefore, plasma levels of buprenorphine may not parallel clinical effects. Cardiovascular and other side effects (sedation, nausea, vomiting, dizziness, sweating, and headache) appear to be similar to those of morphine-like
opioids. Buprenorphine is relatively well absorbed by most routes. Administered sublingually, the drug (0.4 to 0.8 mg) produces satisfactory analgesia in postoperative patients. Concentrations in blood peak within 5 minutes after intramuscular injection and within 1 to 2 hours after oral or sublingual administration. While the half-life in plasma has been reported to be about 3 hours, this value bears little relationship to the rate of disappearance of effects (see above). Both N-dealkylated and conjugated metabolites are detected in the urine, but most of the drug is excreted unchanged in the feces. About 96% of the circulating drug is bound to protein. Physical Dependence When buprenorphine is discontinued, a withdrawal syndrome develops that is delayed in onset for 2 days to 2 weeks; this consists of typical, but generally not very severe, morphine-like withdrawal signs and symptoms, and it persists for about 1 to 2 weeks (Bickel et al., 1988; Fudala et al., 1989). Therapeutic Uses Buprenorphine (BUPRENEX) may be used as an analgesic and also has proven to be useful as a maintenance drug for opioid-dependent subjects (Johnson et al., 2000). The drug was approved provisionally for use in the treatment of heroin addiction when the Drug Addiction Treatment Act was passed by the United States Congress and signed by the President in October of 2000. Approval by the Food and Drug Administration is pending. The usual intramuscular or intravenous dose for analgesia is 0.3 mg, given every 6 hours. Sublingual doses of 0.4 to 0.8 mg produce effective analgesia, and doses of 6 to 8 mg appear to be about equal to 60 mg of methadone as a maintenance agent. Other Agonist/Antagonists Meptazinol is an agonist/antagonist opioid that is about one-tenth as potent as morphine in producing analgesia. Its duration of action is somewhat shorter than that of morphine. Meptazinol also has cholinergic actions that may contribute to its analgesic effects (see Holmes and Ward, 1985).
Nevertheless, its analgesic actions are antagonized by naloxone, and it can precipitate withdrawal in animals dependent on μ-receptor agonists. The potential for abuse of meptazinol is less than that of morphine because dysphoric side effects appear when the dose is increased. Dezocine (DALGAN), an aminotetralin, is another agonist/antagonist; its potency and duration of analgesic effect are similar to those of morphine. Increasing the dose above 30 mg does not produce progressively more severe respiratory depression. In postaddicts, its subjective effects are similar to those of μ-agonist opioids (Jasinski and Preston, 1985). Opioid Antagonists Under ordinary circumstances, the drugs to be discussed in this section produce few effects unless opioids with agonistic actions have been administered previously. However, when the endogenous opioid systems are activated, as in shock or certain forms of stress, the administration of an opioid antagonist alone may have visible consequences. These agents have obvious therapeutic utility in the treatment of overdosage with opioids. As the understanding of the role of endogenous opioid systems in pathophysiological states increases, additional therapeutic indications for these antagonists may develop.

Chemistry Relatively minor changes in the structure of an opioid can convert a drug that is primarily an agonist into one with antagonistic actions at one or more types of opioid receptors. The most common such substitution is that of a larger moiety (e.g., an allyl or methylcyclopropyl group) for the N-methyl group that is typical of the μ-receptor agonists. Such substitutions transform morphine to nalorphine, levorphanol to levallorphan, and oxymorphone to naloxone or naltrexone (see Table 23-5). In some cases, congeners are produced that are competitive antagonists at μ receptors but that also have agonistic actions at κ receptors. Nalorphine and levallorphan have such properties. Other congeners, especially naloxone and naltrexone, appear to be devoid of agonistic actions and probably interact with all types of opioid receptors, albeit with widely different affinities (see Martin, 1983). Nalmefene (REVEX) is a relatively pure μ-receptor antagonist that is more potent than naloxone (Dixon et al., 1986). A number of other nonpeptide antagonists have been developed that are relatively selective for individual types of opioid receptors. These include cyprodime and β-funaltrexamine (β-FNA), which are selective for μ receptors; naltrindole, which is selective for δ receptors; and nor-binaltorphimine, which is selective for κ receptors (see Portoghese, 1989; Pasternak, 1993). Pharmacological Properties If endogenous opioid systems have not been activated, the pharmacological actions of opioid antagonists depend on whether or not an opioid agonist has been administered previously, on the pharmacological profile of that opioid, and on the degree to which physical dependence on an opioid has developed. Effects in the Absence of Opioid Drugs Subcutaneous doses of naloxone (NARCAN; up to 12 mg) produce no discernible subjective effects in human beings, and 24 mg causes only slight drowsiness. Naltrexone (REVIA) also appears to be a relatively pure antagonist but with higher oral efficacy and a longer duration of action.
At high doses, both naloxone and naltrexone may have some special agonistic effects. However, these are of little clinical significance. At doses in excess of 0.3 mg/kg of naloxone, normal subjects show increased systolic blood pressure and decreased performance on tests of memory. High doses of naltrexone appeared to cause mild dysphoria in one study but almost no subjective effect in several others (see Gonzalez and Brogden, 1988). Although high doses of antagonists might be expected to alter the actions of endogenous opioid peptides, the detectable effects are usually both subtle and limited (Cannon and Liebeskind, 1987). Most likely, this reflects the low levels of tonic activity of the opioid systems. In this regard, analgesic effects can be differentiated from endocrine effects, in which naloxone causes readily demonstrable changes in hormone levels (see below). It is interesting that naloxone appears to block the analgesic effects of placebo medications and acupuncture. In laboratory animals, the administration of naloxone will reverse or attenuate the hypotension associated with shock of diverse origins including that caused by anaphylaxis, endotoxin, hypovolemia, and injury to the spinal cord; opioid agonists aggravate these conditions (Amir, 1988). Naloxone apparently acts to antagonize the actions of endogenous opioids that are mobilized by pain or stress and that are involved in the regulation of blood pressure by the CNS. Although neural damage that follows trauma to the spinal cord or cerebral ischemia also appears to involve endogenous opioids, it is not certain whether opioid antagonists can prevent damage to these or other organs and/or increase rates of survival. Nevertheless, opioid antagonists can reduce the extent of injury in some animal models,

perhaps by blocking κ receptors (Faden, 1988). As noted above, endogenous opioid peptides participate in the regulation of pituitary secretion, apparently by exerting tonic inhibitory effects on the release of certain hypothalamic hormones (see Chapter 56: Pituitary Hormones and Their Hypothalamic Releasing Factors). Thus, the administration of naloxone or naltrexone increases the secretion of gonadotropin-releasing hormone and corticotropin-releasing factor and elevates the plasma concentrations of LH, FSH, and ACTH, as well as the hormones produced by their target organs. Antagonists do not consistently alter basal or stress-induced concentrations of prolactin in plasma in men; paradoxically, naloxone stimulates the release of prolactin in women. Opioid antagonists augment the increases in plasma concentrations of cortisol and catecholamines that normally accompany stress or exercise. The neuroendocrine effects of opioid antagonists have been reviewed (Howlett and Rees, 1986). Endogenous opioid peptides probably have some role in the regulation of feeding or energy metabolism, because opioid antagonists increase energy expenditure and interrupt hibernation in appropriate species and induce weight loss in genetically obese rats. The antagonists also prevent stress-induced overeating and obesity in rats. These observations have led to the experimental use of opioid antagonists in the treatment of human obesity, especially that associated with stress-induced eating disorders. However, naltrexone does not accelerate weight loss in very obese subjects, even though short-term administration of opioid antagonists reduces food intake in both lean and obese individuals (Atkinson, 1987). Antagonistic Actions Small doses (0.4 to 0.8 mg) of naloxone given intramuscularly or intravenously prevent or promptly reverse the effects of μ-receptor agonists. In patients with respiratory depression, an increase in respiratory rate is seen within 1 or 2 minutes. 
Sedative effects are reversed, and blood pressure, if depressed, returns to normal. Higher doses of naloxone are required to antagonize the respiratory-depressant effects of buprenorphine; 1 mg of naloxone intravenously completely blocks the effects of 25 mg of heroin. Naloxone reverses the psychotomimetic and dysphoric effects of agonist/antagonist agents such as pentazocine, but much higher doses (10 to 15 mg) are required. The duration of antagonistic effects depends on the dose but is usually 1 to 4 hours. Antagonism of opioid effects by naloxone often is accompanied by "overshoot" phenomena. For example, respiratory rate depressed by opioids transiently becomes higher than that prior to the period of depression. Rebound release of catecholamines may cause hypertension, tachycardia, and ventricular arrhythmias. Pulmonary edema also has been reported after naloxone administration. Effects in Physical Dependence In subjects who are dependent on morphine-like opioids, small subcutaneous doses of naloxone (0.5 mg) precipitate a moderate-to-severe withdrawal syndrome that is very similar to that seen after abrupt withdrawal of opioids, except that the syndrome appears within minutes after administration and subsides in about 2 hours. The severity and duration of the syndrome are related to the dose of the antagonist and to the degree and type of dependence. Higher doses of naloxone will precipitate a withdrawal syndrome in patients dependent on pentazocine, butorphanol, or nalbuphine. Naloxone produces "overshoot" phenomena suggestive of early acute physical dependence 6 to 24 hours after a single dose of a μ agonist (see Heishman et al., 1989). Tolerance and Physical Dependence Even after prolonged administration of high doses, discontinuation of naloxone is not followed by any recognizable withdrawal syndrome, and the withdrawal of naltrexone, another relatively pure

antagonist, produces very few signs and symptoms. However, long-term administration of antagonists increases the density of opioid receptors in the brain and causes a temporary exaggeration of responses to the subsequent administration of opioid agonists (Yoburn et al., 1988). Naltrexone and naloxone have little or no potential for abuse. Absorption, Fate, and Excretion Although absorbed readily from the gastrointestinal tract, naloxone is almost completely metabolized by the liver before reaching the systemic circulation and thus must be administered parenterally. The drug is absorbed rapidly from parenteral sites of injection and is metabolized in the liver, primarily by conjugation with glucuronic acid; other metabolites are produced in small amounts. The half-life of naloxone is about 1 hour, but its clinically effective duration of action can be even less. Compared with naloxone, naltrexone retains much more of its efficacy by the oral route, and its duration of action approaches 24 hours after moderate oral doses. Peak concentrations in plasma are reached within 1 to 2 hours and then decline with an apparent half-life of approximately 3 hours; this value does not change with long-term use. Naltrexone is metabolized to 6-naltrexol, which is a weaker antagonist but has a longer half-life of about 13 hours. Naltrexone is much more potent than naloxone, and 100-mg oral doses given to patients addicted to opioids produce concentrations in tissues sufficient to block the euphorigenic effects of 25-mg intravenous doses of heroin for 48 hours (see Gonzalez and Brogden, 1988). Therapeutic Uses Opioid antagonists have established uses in the treatment of opioid-induced toxicity, especially respiratory depression; in the diagnosis of physical dependence on opioids; and as therapeutic agents in the treatment of compulsive users of opioids, as discussed in Chapter 24: Drug Addiction and Drug Abuse. 
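The contrast between these opioid antagonists' half-lives (about 1 hour for naloxone versus roughly 3 hours for naltrexone's parent drug) can be made concrete with a simple first-order elimination calculation. This is an editor's illustrative sketch of the arithmetic only, not a clinical tool, and it deliberately uses a one-compartment simplification:

```python
def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """Fraction of drug remaining t_hours after a dose, assuming simple
    first-order (exponential) elimination with the given half-life."""
    return 0.5 ** (t_hours / half_life_hours)

# Naloxone (t1/2 about 1 h): two hours after a dose only ~25% remains,
# consistent with the need for repeated doses or a continuous infusion.
print(round(fraction_remaining(2.0, 1.0), 2))   # 0.25

# Naltrexone parent drug (t1/2 about 3 h) at the same time point:
print(round(fraction_remaining(2.0, 3.0), 2))   # 0.63
```

Note that this single-compartment picture ignores the active metabolite 6-naltrexol (half-life about 13 hours), which is one reason naltrexone's clinically useful duration far exceeds the 3-hour half-life of the parent drug.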
Their potential utility in the treatment of shock, stroke, spinal cord and brain trauma, and other disorders that may involve mobilization of endogenous opioid peptides remains to be established. Naltrexone is approved by the United States Food and Drug Administration for treatment of alcoholism (see Chapters 18: Ethanol and 24: Drug Addiction and Drug Abuse). Treatment of Opioid Overdosage Naloxone hydrochloride is used to treat opioid overdose. As discussed earlier, it acts rapidly to reverse the respiratory depression associated with high doses of opioids. However, it should be used cautiously, since it also can precipitate withdrawal in dependent subjects and cause undesirable cardiovascular side effects. By carefully titrating the dose of naloxone, it usually is possible to antagonize the respiratory-depressant actions without eliciting a full withdrawal syndrome. The duration of action of naloxone is relatively short, and it often must be given repeatedly or by continuous infusion. Opioid antagonists also have been effectively employed to decrease neonatal respiratory depression secondary to the intravenous or intramuscular administration of opioids to the mother. In the neonate, the initial dose is 10 μg/kg, given intravenously, intramuscularly, or subcutaneously. Centrally Active Antitussive Agents Cough is a useful physiological mechanism that serves to clear the respiratory passages of foreign material and excess secretions. It should not be suppressed indiscriminately. There are, however, many situations in which cough does not serve any useful purpose but may, instead, only annoy the

patient or prevent rest and sleep. Chronic cough can contribute to fatigue, especially in elderly patients. In such situations the physician should use a drug that will reduce the frequency or intensity of the coughing. The cough reflex is complex, involving the central and peripheral nervous systems as well as the smooth muscle of the bronchial tree. It has been suggested that irritation of the bronchial mucosa causes bronchoconstriction, which, in turn, stimulates cough receptors (which probably represent a specialized type of stretch receptor) located in tracheobronchial passages. Afferent conduction from these receptors is via fibers in the vagus nerve; central components of the reflex probably involve several mechanisms or centers that are distinct from the mechanisms involved in the regulation of respiration. The drugs that directly or indirectly can affect this complex mechanism are diverse. For example, cough may be the first or only symptom in bronchial asthma or allergy, and in such cases bronchodilators (e.g., β2-adrenergic receptor agonists; see Chapter 10: Catecholamines, Sympathomimetic Drugs, and Adrenergic Receptor Antagonists) have been shown to reduce cough without having any significant central effects; other drugs act primarily on the central or the peripheral nervous system components of the cough reflex. The early literature on antitussives has been reviewed by Eddy et al. (1969). A number of drugs are known to reduce cough as a result of their central actions, although the exact mechanisms are still not entirely clear. Included among them are the opioid analgesics discussed above (codeine and hydrocodone are the opioids most commonly used to suppress cough), as well as a number of nonopioid agents. Cough suppression often occurs with lower doses of opioids than those needed for analgesia. 
A 10- or 20-mg oral dose of codeine, although ineffective for analgesia, produces a demonstrable antitussive effect, and higher doses produce even more suppression of chronic cough. In selecting a specific centrally active agent for a particular patient, the significant considerations are its antitussive efficacy against pathological cough and the incidence and type of side effects to be expected. In the majority of situations requiring a cough suppressant, liability for abuse need not be a major consideration. Most of the nonopioid agents now offered as antitussives are effective against cough induced by a variety of experimental techniques. However, the ability of these tests to predict clinical efficacy is limited. Dextromethorphan Dextromethorphan (d-3-methoxy-N-methylmorphinan) is the d isomer of the codeine analog methorphan; however, unlike the l isomer, it has no analgesic or addictive properties and does not act through opioid receptors. The drug acts centrally to elevate the threshold for coughing. Its effectiveness in patients with pathological cough has been demonstrated in controlled studies; its potency is nearly equal to that of codeine. Compared with codeine, dextromethorphan produces fewer subjective and gastrointestinal side effects (Matthys et al., 1983). In therapeutic dosages, the drug does not inhibit ciliary activity, and its antitussive effects persist for 5 to 6 hours. Its toxicity is low, but extremely high doses may produce CNS depression. Sites that bind dextromethorphan with high affinity have been identified in membranes from various regions of the brain (Craviso and Musacchio, 1983). Although dextromethorphan is known to function as an NMDA-receptor antagonist, these binding sites are not limited to the known distribution of NMDA receptors (Elliott et al., 1994). Thus, the mechanism by which dextromethorphan exerts its antitussive effects is still unclear. 
Two other known antitussives, carbetapentane and caramiphen, also bind avidly to this site, but codeine, levopropoxyphene, and other antitussive opioids (as well as naloxone) are not bound. Although noscapine (see below)

enhances the affinity of dextromethorphan, it appears to interact with distinct binding sites (Karlsson et al., 1988). The relationship of these binding sites to antitussive actions is not known; however, these observations, coupled with the ability of naloxone to antagonize the antitussive effects of codeine but not those of dextromethorphan, indicate that cough suppression can be achieved by a number of different mechanisms. The average adult dosage of dextromethorphan hydrobromide is 10 to 30 mg three to six times daily; however, as is the case with codeine, higher doses often are required. The drug is generally marketed for "over-the-counter" sale in numerous syrups and lozenges or in combinations with antihistamines and other agents. Other Drugs Levopropoxyphene napsylate, the l-isomer of dextropropoxyphene, in doses of 50 to 100 mg orally, appears to suppress cough to about the same degree as does 30 mg of dextromethorphan. Unlike dextropropoxyphene, levopropoxyphene has little or no analgesic activity. Noscapine is a naturally occurring opium alkaloid of the benzylisoquinoline group; except for its antitussive effect, it has no significant actions on the CNS in doses within the therapeutic range. The drug is a potent releaser of histamine, and large doses cause bronchoconstriction and transient hypotension. Other drugs that have been used as centrally acting antitussives include carbetapentane, caramiphen, chlophedianol, diphenhydramine, and glaucine. Each is a member of a distinct pharmacological class unrelated to the opioids. The mechanism of action of diphenhydramine, an antihistamine, is unclear. Although sedative effects are common, paradoxical excitement may be seen in infants; dryness of mucous membranes caused by anticholinergic effects and thickening of mucus may be a disadvantage. 
In general, the toxicity of these agents is low, but controlled clinical studies are still insufficient to determine whether or not they merit consideration as alternatives to more thoroughly studied agents. Pholcodine [3-O-(2-morpholinoethyl)morphine] is used clinically in many countries outside the United States. Although structurally related to the opioids, it has no opioid-like actions because the substitution at the 3-position is not removed by metabolism. Pholcodine is at least as effective as codeine as an antitussive; it has a long half-life and can be given once or twice daily (see Findlay, 1988). Benzonatate (TESSALON) is a long-chain polyglycol derivative chemically related to procaine and believed to exert its antitussive action on stretch or cough receptors in the lung, as well as by a central mechanism. It has been administered by all routes; the oral dosage is 100 mg three times daily, but higher doses have been used. Therapeutic Uses of Opioid Analgesics Sir William Osler called morphine "God's own medicine." Opioids are still the mainstay of pain treatment. However, the development of new analgesic compounds and new routes of administration have increased the therapeutic options available to clinicians, while at the same time helping to minimize undesirable side effects. In this section, we will outline guidelines for rational drug selection, discuss routes of administration other than the standard oral and parenteral methods, and outline general principles for the use of opioids in acute and chronic pain states. Extensive efforts by many individuals and organizations have resulted in the publication of many useful guidelines for the administration of opioids. These have been developed for a number of

clinical situations, including acute pain, trauma, cancer, nonmalignant chronic pain, and treatment of pain in children (Agency for Health Care Policy and Research, 1992a, 1992b, 1994; International Association for the Study of Pain, 1992; American Pain Society, 1999; Grossman et al., 1999; World Health Organization, 1998; Berde et al., 1990). These guidelines provide comprehensive discussions of dosing regimens and drug selection and also provide protocols for the management of complex conditions. In the case of cancer pain, adherence to standardized protocols for cancer pain management (Agency for Health Care Policy and Research, 1994) has been shown to improve pain management significantly (Du Pen et al., 1999). Guidelines for the oral and parenteral dosing of commonly used opioids are presented in Table 23-6. These guidelines are for acute pain management in opioid-naïve patients. Adjustments will need to be made for use in opioid-tolerant patients and in chronic pain states. For children under 6 months of age, especially those who are ill or premature, expert consultation should be obtained. The pharmacokinetics and potency of opioids can be substantially altered in these patients, and in some cases there is a significant risk of apnea. It also should be noted that there is substantial individual variability in responses to opioids. A standard intramuscular dose of 10 mg of morphine sulfate will relieve severe pain adequately in only 2 of 3 patients. Adjustments will have to be made based on clinical response. In general, it is recommended that opioids always be combined with other analgesic agents, such as nonsteroidal anti-inflammatory drugs (NSAIDS) or acetaminophen. In this way, one can take advantage of additive analgesic effects and minimize the dose of opioids and thus undesirable side effects. In some situations, NSAIDS can provide analgesia equal to that produced by 60 mg of codeine. 
Potentiation of opioid action by NSAIDs may be due to increased conversion of arachidonic acid to 12-lipoxygenase products that facilitate effects of opioids on K+ channels (Vaughan et al., 1997). This "opioid-sparing" strategy is the backbone of the "analgesic ladder" for pain management proposed by the World Health Organization (1990). Weaker opioids can be supplanted by stronger opioids in cases of moderate and severe pain. In addition, analgesics always should be dosed in a continuous or "around the clock" fashion rather than on an as-needed basis for chronic severe pain. This provides more consistent analgesic levels and avoids unnecessary suffering. Knowledge of the pharmacological profiles of analgesics allows the rational selection of dosing intervals without risk of overdosage. Factors guiding the selection of specific opioid compounds for pain treatment include potency, pharmacokinetic characteristics, and the routes of administration available. A more potent compound could be useful when high doses of opioid are required, so the medicine can be given in a smaller volume. Duration of action also is an important consideration. For example, a long-acting opioid such as methadone may be appropriate when less-frequent dosing is desired. For short, painful procedures, a quick-acting, fast-dissipating compound such as remifentanil would be a useful choice. In special cases, where a lower addiction risk is required or in patients unable to tolerate other opioids, a partial agonist or mixed agonist/antagonist compound might be a rational choice. The properties of some commonly used orally administered opioids are discussed in more detail below. Morphine is available for oral use in standard and controlled-release preparations. Due to first-pass metabolism, morphine is two- to sixfold less potent orally than parenterally. This is important to remember when converting a patient from parenteral to oral medication. 
There is wide variability in the first-pass metabolism, and the dose should be titrated to the patient's needs. In children who weigh less than 50 kg, morphine can be given at 0.1 mg/kg every 3 to 4 hours parenterally or at 0.3 mg/kg orally.
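The weight-based figures above reduce to simple arithmetic (dose = weight × mg/kg). The sketch below merely illustrates that calculation with the text's numbers; the function name and structure are the editor's invention, and actual dosing must be individualized and verified by a clinician:

```python
def illustrative_morphine_dose_mg(weight_kg: float, route: str) -> float:
    """Arithmetic illustration only: 0.1 mg/kg parenteral or 0.3 mg/kg
    oral, per the text's figures for children weighing under 50 kg.
    Not a clinical dosing tool."""
    if not 0 < weight_kg < 50:
        raise ValueError("the quoted guideline applies to children under 50 kg")
    mg_per_kg = {"parenteral": 0.1, "oral": 0.3}[route]
    return round(weight_kg * mg_per_kg, 2)

# A 20-kg child: 2 mg parenterally (every 3 to 4 hours) versus 6 mg orally;
# the 3-fold oral:parenteral ratio reflects first-pass metabolism.
print(illustrative_morphine_dose_mg(20, "parenteral"))  # 2.0
print(illustrative_morphine_dose_mg(20, "oral"))        # 6.0
```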

Codeine is widely used, largely because of its high oral/parenteral potency ratio. Orally, codeine at 30 mg is approximately equianalgesic to 325 to 600 mg of aspirin. Combinations of codeine with aspirin or acetaminophen usually provide additive actions, and at these doses analgesic efficacy can exceed that of 60 mg of codeine (see Beaver, 1988). Many drugs can be used instead of either morphine or codeine, as shown in Table 23-6. Oxycodone, with its high oral/parenteral potency ratio, is widely used in combination with aspirin (PERCODAN, others) or acetaminophen (PERCOCET 2.5/325, others), although it is available alone (ROXICODONE, others). Heroin (diacetylmorphine) is not available for therapeutic use in the United States, although it has been used in the United Kingdom. Given intramuscularly, it is approximately twice as potent as morphine. Pharmacologically, heroin is very similar to morphine and does not appear to have any unique therapeutic advantages over the available opioids (Sawynok, 1986; Kaiko et al., 1981). It also may be helpful to employ other agents (adjuvants) that enhance opioid analgesia and that may add beneficial effects of their own. For example, the combination of an opioid with a small dose of amphetamine may augment analgesia while reducing the sedative effects. Certain antidepressants, such as amitriptyline and desipramine, also may enhance opioid analgesia, and they may have analgesic actions in some types of neuropathic (deafferentation) pain (see McQuay, 1988). Other potentially useful adjuvants include certain antihistamines, anticonvulsants such as carbamazepine and phenytoin, and glucocorticoids. Alternative Routes of Administration In addition to the traditional oral and parenteral formulations for opioids, many other methods have been developed in an effort to improve therapeutic efficacy while minimizing side effects. These routes also improve the ease of use of opioids, and increase patient satisfaction. 
Patient-Controlled Analgesia (PCA) With this modality, the patient has limited control of the dosing of opioid from an infusion pump within tightly mandated parameters. PCA can be used for intravenous or epidural infusion. This technique avoids any delays in administration and permits greater dosing flexibility than other regimens, better adapting to individual differences in responsiveness to pain and to opioids. It also gives the patient a greater sense of control. With shorter-acting opioids, serious toxicity or excessive use rarely occurs. An early concern that self-administration of opioids would increase the probability of addiction has not materialized. PCA is suitable for both adults and children, and it is preferred over intramuscular injections for postoperative pain control (Rodgers et al., 1988). Computer-Assisted Continuous Infusion (CACI) The idea behind this mode of administration is to enable clinicians to titrate intravenous agents in a fashion similar to that used in delivering volatile agents (Sanford and Gutstein, 1995). CACI based on detailed pharmacokinetic models has been used successfully to administer opioids (Shafer et al., 1990; Bailey et al., 1993). However, true "closed-loop" control of opioid administration requires the capability of continuously measuring plasma opioid levels with indwelling sensors. Until such realtime measurement is available, accurate assessment of dose-effect relationships in patients is not possible. Intraspinal Infusion Administration of opioids into the epidural or intrathecal space provides more direct access to the first pain-processing synapse in the dorsal horn of the spinal cord. This permits the use of doses

substantially lower than those required for oral or parenteral administration (see Table 23-7). Systemic side effects are thus decreased. However, epidural opioids have their own dose-dependent side effects, such as itching, nausea, vomiting, respiratory depression, and urinary retention. The use of hydrophilic opioids such as preservative-free morphine (DURAMORPH, others) permits more rostral spread of the compound, allowing it to directly affect supraspinal sites. As a consequence, after intraspinal morphine, delayed respiratory depression can be observed for as long as 24 hours after a bolus dose. While the risk of delayed respiratory depression is reduced with more lipophilic opioids, it is not eliminated. Extreme vigilance and appropriate monitoring are required for all patients receiving intraspinal narcotics. Nausea and vomiting also are more prominent symptoms with intraspinal morphine. However, supraspinal analgesic centers also can be stimulated, possibly leading to synergistic analgesic effects. Analogous to the relationship between systemic opioids and NSAIDS, intraspinal narcotics often are combined with local anesthetics. This permits the use of lower concentrations of both agents, minimizing local anesthetic-induced complications of motor blockade and the opioid-induced complications listed above. Epidural administration of opioids has become popular in the management of postoperative pain and for providing analgesia during labor and delivery. Lower systemic opioid levels are achieved with epidural opioids, leading to less placental transfer and less potential for respiratory depression of the newborn (Shnider and Levinson, 1987). Intrathecal ("spinal" anesthesia) administration of opioids as a single bolus also is popular for acute pain management. Chronic intrathecal infusions generally are reserved for use in chronic pain patients. 
Peripheral Analgesia As previously mentioned, opioid receptors on peripheral nerves have been shown to respond to locally applied opioids during inflammation (Stein, 1995). Peripheral analgesia permits the use of lower doses, applied locally, than those necessary to achieve a systemic effect. The effectiveness of this technique has been demonstrated in studies of postoperative pain (Stein et al., 1991). These studies also suggest that peripherally acting opioid compounds would be effective in other selected circumstances without entering the CNS to cause many undesirable side effects. Development of such compounds and expansion of clinical applications of this technique currently are active areas of research. Rectal Administration This route is an alternative for patients who have difficulty swallowing or other oral pathology and who prefer a less-invasive route than parenteral administration (De Conno et al., 1995). This route is not well tolerated in most children. Onset of action is seen within 10 minutes. In the United States, morphine, hydromorphone, and oxymorphone are available in rectal suppository formulation (American Pain Society, 1999). Administration by Inhalation Preliminary studies have shown that opioids delivered by nebulizer can be an effective means of analgesic drug delivery (Worsley et al., 1990; Higgins et al., 1991). However, constant supervision is required when administering the drug, and variable delivery to the lungs can cause differences in therapeutic effect. In addition, possible environmental contamination is a concern. However, development of the inhaled route could provide a more convenient and cost-effective, adjunctive method of analgesic delivery for patients experiencing chronic pain.

Oral Transmucosal Administration Opioids can be absorbed through the oral mucosa more rapidly than through the stomach. Bioavailability is greater due to avoidance of first-pass metabolism, and lipophilic opioids are better absorbed by this route than are hydrophilic compounds such as morphine (Weinberg et al., 1988). A transmucosal delivery system that suspends fentanyl in a dissolvable matrix has been approved for clinical use (ACTIQ). Its primary indication is for treatment of breakthrough cancer pain (Ashburn et al., 1989). In this setting, transmucosal fentanyl relieves pain within 15 minutes, and patients easily can titrate the appropriate dose. Transmucosal fentanyl also has been studied as a premedicant for children. However, this technique has been largely abandoned due to a substantial incidence of undesirable side effects such as respiratory depression, sedation, nausea, vomiting, and pruritus. Transdermal or Iontophoretic Administration Transdermal fentanyl patches are approved for use with sustained pain. The opioid permeates the skin, and a "depot" is established in the stratum corneum layer. Unlike other transdermal systems (i.e., transdermal scopolamine), anatomic position of the patch does not affect absorption. However, fever and external sources of heat (heating pads, hot baths) can increase absorption of fentanyl and potentially lead to an overdose (Rose et al., 1993). This modality is well suited for cancer pain treatment because of its ease of use, prolonged duration of action, and stable blood levels (Portenoy et al., 1993). It may take up to 12 hours to develop analgesia and up to 16 hours to observe full clinical effect. Plasma levels stabilize after two sequential patch applications, and these kinetics do not appear to change with repeated applications (Portenoy, 1993). However, there may be a great deal of variability in plasma levels after a given dose. The plasma half-life after patch removal is about 17 hours. 
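The 17-hour half-life after patch removal implies a slow decline in plasma levels. Assuming simple first-order decay (an editor's simplification; the true kinetics reflect continued absorption from the skin depot), the time to fall to a given fraction of the level at removal can be sketched as:

```python
import math

def hours_to_reach_fraction(fraction: float, half_life_hours: float = 17.0) -> float:
    """Hours for a plasma level to decay to `fraction` of its starting value,
    assuming first-order elimination with the stated half-life."""
    return half_life_hours * math.log(fraction) / math.log(0.5)

# Falling to 25% of the level at patch removal takes two half-lives, ~34 hours,
# which is why coverage with a short-acting antagonist such as naloxone may
# need to be prolonged after a transdermal fentanyl overdose.
print(round(hours_to_reach_fraction(0.25)))  # 34
```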
Thus, if excessive sedation or respiratory depression is experienced, antagonist infusions may need to be maintained for an extended period (Payne, 1992). Dermatological side effects from the patches, such as rash and itching, usually are mild. Iontophoresis is the transport of soluble ions through the skin by using a mild electric current. This technique has been employed with morphine (Ashburn et al., 1992). Fentanyl and sufentanil have been chemically modified and applied by iontophoresis in rats (Thysman and Preat, 1993). Effective analgesia was achieved in less than 1 hour, suggesting that iontophoresis could be a promising modality for postoperative pain. It should be noted that increasing the applied current will increase drug delivery and could lead to overdose. However, unlike transdermal opioids, a drug reservoir does not build up in the skin, thus limiting the duration of both main and side effects. General Principles of Opioid Use Opioid analgesics provide symptomatic relief of pain, but the underlying disease remains. The physician must weigh the benefits of this relief against any potential risk to the patient, which may be quite different in an acute compared with a chronic disease. In acute problems, opioids will reduce the intensity of pain. However, physical signs (such as abdominal rigidity) will generally remain. Relief of pain also can facilitate history taking, examination, and the patient's ability to tolerate diagnostic procedures. Patients should not be evaluated inadequately because of the physician's unwillingness to prescribe analgesics, nor in most cases should analgesics be withheld for fear of obscuring the progression of underlying disease. The problems that arise in the relief of pain associated with chronic conditions are more complex. Repeated daily administration eventually will produce tolerance and some degree of physical dependence. The degree will depend on the particular drug, the frequency of administration, and the
quantity administered. The decision to control any chronic symptom, especially pain, by the repeated administration of an opioid must be made carefully. When pain is due to chronic, nonmalignant disease, measures other than opioid drugs should be employed if they are effective and available. Such measures include the use of nonsteroidal anti-inflammatory agents, local nerve block, antidepressant drugs, electrical stimulation, acupuncture, hypnosis, or behavioral modification (see Foley, 1985). However, highly selected subpopulations of chronic nonmalignant pain patients can be adequately maintained on opioids for extended periods of time (Portenoy, 1990). In the usual doses, morphine-like drugs relieve suffering by altering the emotional component of the painful experience as well as by producing analgesia. Control of pain, especially chronic pain, must include attention to both psychological factors and the social impact of the illness, which sometimes play dominant roles in determining the suffering experienced by the patient. In addition to emotional support, the physician also must consider the substantial variability in both the patient's capacity to tolerate pain and the response to opioids. As a result, some patients may require considerably more than the average dose of a drug to experience any relief from pain; others may require dosing at shorter intervals. Some clinicians, out of an exaggerated concern for the possibility of inducing addiction, tend to prescribe initial doses of opioids that are too small or given too infrequently to alleviate pain; they then respond to the patient's continued complaints with an even more exaggerated concern about drug dependence, despite the high probability that the request for more drug is only the expected consequence of the inadequate dosage initially prescribed (see Sriwatanakul et al., 1983). 
It also is important to note that infants and children are probably more apt to receive inadequate treatment for pain than are adults due to communication difficulties, lack of familiarity with appropriate pain assessment methodologies, and inexperience with the use of strong opioids in children. If an illness or procedure causes pain for an adult, there is no reason to assume that it will produce less pain for a child (see Yaster and Deshpande, 1988). Pain of Terminal Illness and Cancer Pain Opioids are not indicated in all cases of terminal illness, but the analgesia, tranquility, and even the euphoria afforded by the use of opioids can make the final days far less distressing for the patient and family. Although physical dependence and tolerance may develop, this possibility should not in any way prevent physicians from fulfilling their primary obligation to ease the patient's discomfort. The physician should not wait until the pain becomes agonizing; no patient should ever wish for death because of a physician's reluctance to use adequate amounts of effective opioids. This may sometimes entail the regular use of opioid analgesics in substantial doses. Such patients, while they may be physically dependent, are not "addicts" even though they may need large doses on a regular basis. Physical dependence is not equivalent to addiction (see Chapter 24: Drug Addiction and Drug Abuse). Most clinicians who are experienced in the management of chronic pain associated with malignant disease or terminal illness recommend that opioids be administered at sufficiently short, fixed intervals so that pain is continually under control and patients do not dread its return (Foley, 1993). Less drug is needed to prevent the recurrence of pain than to relieve it. Morphine remains the opioid of choice in most of these situations, and the route and dose should be adjusted to the needs of the individual patient. Many clinicians find that oral morphine is adequate in most situations. 
Sustained-release preparations of oral morphine are now available that can be administered at 8- to 12-hour intervals. Superior control of pain often can be achieved with fewer side effects using the same daily dose; a decrease in the fluctuation of plasma concentrations of morphine may be partially responsible.

Constipation is an exceedingly common problem when opioids are used, and the use of stool softeners and laxatives should be initiated early. Amphetamines have demonstrable mood-elevating and analgesic effects and enhance opioid-induced analgesia. However, not all terminal patients require the euphoriant effects of amphetamine, and some experience side effects, such as anorexia. Controlled studies demonstrate no superiority of oral heroin over oral morphine. Similarly, after adjustment is made for potency, parenteral heroin is not superior to morphine in terms of analgesia, effects on mood, or side effects (see Sawynok, 1986). Although tolerance does develop to oral opioids, many patients obtain relief from the same dosage for weeks or months. In cases where one opioid loses effectiveness, switching to another may provide better pain relief. "Cross-tolerance" among opioids exists, but, both clinically and experimentally, cross-tolerance among related receptor agonists is not complete. The reasons for this are unclear, but they may relate to differences between agonists in receptor-binding characteristics and subsequent cellular signaling interactions, as discussed earlier in the chapter. When opioids and other analgesics are no longer satisfactory, nerve block, chordotomy, or other types of neurosurgical intervention such as neurostimulation may be required if the nature of the disease permits. Epidural or intrathecal administration of opioids may be useful when administration of opioids by usual routes no longer yields adequate relief of pain (see above). This technique has been used with ambulatory patients over periods of weeks or months (see Gustafsson and Wiesenfeld-Hallin, 1988). Moreover, portable devices have been developed that permit the patient to control the parenteral administration of an opioid while remaining ambulatory (Kerr et al., 1988). 
These devices use a pump that infuses the drug from a reservoir at a rate that can be tailored to the needs of the patient, and they include mechanisms to limit dosage and/or allow the patient to self-administer an additional "rescue" dose if there is a transient change in the intensity of pain. Nonanalgesic Therapeutic Uses of Opioids Dyspnea Morphine is used to alleviate the dyspnea of acute left ventricular failure and pulmonary edema, and the response to intravenous morphine may be dramatic. The mechanism underlying this relief still is not clear. It may involve an alteration of the patient's reaction to impaired respiratory function and an indirect reduction of the work of the heart due to reduced fear and apprehension. However, it is more probable that the major benefit is due to cardiovascular effects, such as decreased peripheral resistance and an increased capacity of the peripheral and splanchnic vascular compartments (see Vismara et al., 1976). Nitroglycerin, which also causes vasodilation, may be superior to morphine in this condition (see Hoffman and Reynolds, 1987). In patients with normal blood gases but severe breathlessness due to chronic obstruction of airflow ("pink puffers"), drocode (dihydrocodeine), 15 mg orally before exercise, reduces the feeling of breathlessness and increases exercise tolerance (Johnson et al., 1983). Opioids are relatively contraindicated in pulmonary edema due to respiratory irritants unless severe pain also is present; relative contraindications to the use of histamine-releasing opioids in asthma already have been discussed. Special Anesthesia High doses of morphine or other opioids have been used as the primary anesthetic agents in certain surgical procedures. Although respiration is so depressed that physical assistance is required, patients can retain consciousness (see Chapter 14: General Anesthetics). Prospectus

Great strides are being made in understanding structure-function relationships between opioids and endogenous opioid peptides and their receptors. The complex signaling mechanisms and neural circuitry mediating both the salutary and undesirable effects of opioids also are beginning to be understood. The recent discovery of new receptor-selective endogenous opioid ligands and the opioid-related N/OFQ system also will provide opportunities to improve our understanding of opioid pharmacology and physiology. The development of new opioid analgesics and novel delivery routes are improving the care and quality of life for patients requiring opioids. Over the next several years, pursuing these lines of basic and clinical investigation should provide many valuable insights that may allow better targeting of the therapeutic effects of opioid compounds, thereby minimizing undesirable acute side effects and the potentially serious long-term consequences of tolerance and physical dependence. It also is hoped that these efforts will help overcome the less common, but devastating, problem of addiction. Dedication The authors would like to dedicate this chapter to the memory of Dr. Thomas F. Burks, colleague and friend, who had a major impact on the field of opioid pharmacology. Acknowledgment The authors wish to acknowledge Drs. Terry Reisine and Gavril Pasternak, authors of this chapter in the ninth edition of Goodman and Gilman's The Pharmacological Basis of Therapeutics, some of whose text has been retained in this edition.

Chapter 24. Drug Addiction and Drug Abuse


Overview Drugs are so commonly used and abused in modern society that virtually everyone has some familiarity with the concepts of drug addiction and abuse. The term addiction has entered everyday language and often is used to describe behavior that does not involve drug use. For example, the media speak of "addiction" to sex, running, shopping, or TV. While there certainly can be a superficial resemblance among many varieties of compulsive behavior, there currently is no scientific basis for lumping these activities with drug abuse and addiction. These are medical diagnoses with specific criteria that provide the same level of interevaluator reliability as for other medical conditions. Inappropriate use of any drug can be either intentional or inadvertent. Drugs that affect behavior are particularly likely to be taken in excess when the behavioral effects are considered pleasurable. Psychosocial factors tend to be similar across diverse pharmacological agents and are as important in the pathogenesis of these disorders as the unique pharmacological profile of a given drug. Nevertheless, this chapter focuses on the pharmacological aspects of drug abuse and dependence, including legal prescription drugs, illegal drugs such as heroin or cocaine, and nonprescription drugs such as ethanol and nicotine (see also Chapters 9: Agents Acting at the Neuromuscular Junction and Autonomic Ganglia, 17: Hypnotics and Sedatives, 18: Ethanol, and 23: Opioid Analgesics). Drug Dependence

There are many misunderstandings about the origins and even the definitions of drug abuse and addiction. Although many physicians are concerned about "creating addicts," very few individuals begin their drug addiction problems by misuse of prescription drugs. Confusion exists because the correct use of prescribed medications for pain, anxiety, and even hypertension commonly produces tolerance and physical dependence. These are normal physiological adaptations to repeated use of drugs from many different categories. Tolerance and physical dependence are explained in more detail later, but it must be emphasized that they do not imply abuse or addiction. This distinction is important, because patients with pain are sometimes deprived of adequate opioid medication simply because they have shown evidence of tolerance and they exhibit withdrawal symptoms if the analgesic medication is abruptly stopped. Definitions Abuse and addiction have been defined and redefined by several organizations over the past 30 years. The reason for these revisions and disagreements is that abuse and addiction are behavioral syndromes that exist along a continuum from minimal use to abuse to addictive use. While tolerance and physical dependence are biological phenomena that can be defined precisely in the laboratory and diagnosed accurately in the clinic, there is an arbitrary aspect to the definitions of the overall behavioral syndromes of abuse and addiction. The most influential system of diagnosis for mental disorders is that published by the American Psychiatric Association (APA) (DSM-IV, 1994). The APA diagnostic system uses the term substance dependence instead of addiction for the overall behavioral syndrome. It also applies the same general criteria to all types of drugs, regardless of their pharmacological class. Although widely accepted, this terminology can lead to confusion between physical dependence and psychological dependence. 
The term addiction, when used in this chapter, refers to compulsive drug use, that is, the entire substance dependence syndrome as defined in DSM-IV. This should not be confused with physical dependence alone, a common error among physicians. Addiction is not used as a pejorative term but rather for clarity of communication; in fact, the journal Addiction is one of the oldest scientific journals in this therapeutic area. The APA defines substance dependence (addiction) as a cluster of symptoms indicating that the individual continues use of the substance despite significant substance-related problems. Evidence of tolerance and withdrawal symptoms is included in the list of symptoms, but neither tolerance nor withdrawal is necessary or sufficient for a diagnosis of substance dependence. Dependence (addiction) requires three or more of the symptoms, while "abuse" can be diagnosed when only one or two symptoms are present. Origins of Substance Dependence Many variables operate simultaneously to influence the likelihood of any given person becoming a drug abuser or an addict. These variables can be organized into three categories: agent (drug), host (user), and environment (see Table 24-1). Agent (Drug) Variables Drugs vary in their ability to produce immediate good feelings in the user. Drugs that reliably produce intensely pleasant feelings (euphoria) are more likely to be taken repeatedly. Reinforcement refers to the ability of drugs to produce effects that make the user wish to take them again. The more strongly reinforcing a drug is, the greater the likelihood that the drug will be abused. Reinforcing properties of a drug can be reliably measured in animals. Generally, animals such as rats or monkeys equipped with intravenous catheters connected to lever-regulated pumps will work
to obtain injections of the same drugs in roughly the same order of potency that human beings will. Thus, medications can be screened for their potential for abuse in human beings by the use of animal models. Reinforcing properties of drugs are associated with their ability to increase levels of the neurotransmitters in critical brain areas (see Chapter 12: Neurotransmission and the Central Nervous System). Cocaine, amphetamine, ethanol, opioids, and nicotine all reliably increase extracellular fluid dopamine levels in the nucleus accumbens region. Brain microdialysis permits sampling of extracellular fluid while animals, usually rats, are freely moving or receiving drugs. Smaller increases in dopamine in the nucleus accumbens also are observed when the rat is presented with sweet foods or a sexual partner. In contrast, drugs that block dopamine receptors generally produce bad feelings, i.e., dysphoric effects. Neither animals nor human beings will take such drugs spontaneously. Despite strong correlative findings, a causal relationship between dopamine and euphoria/dysphoria has not been established, and other findings emphasize additional roles of noradrenergic, serotonergic, opioidergic, and GABAergic mechanisms in mediating the reinforcing effects of drugs. The abuse liability of a drug is enhanced by rapidity of onset, since effects that occur soon after administration are more likely to initiate the chain of events that lead to loss of control over drug taking. The pharmacokinetic variables that influence the time it takes the drug to reach critical receptor sites in the brain are explained in more detail in Chapter 1: Pharmacokinetics: The Dynamics of Drug Absorption, Distribution, and Elimination. The history of cocaine use illustrates the changes in abuse liability of the same compound, depending on the form and the route of administration. Coca leaves can be chewed, and the alkaloidal cocaine is slowly absorbed through the buccal mucosa. 
This method produces low cocaine blood levels and correspondingly low levels in the brain. The mild stimulant effects produced by the chewing of coca leaves have a gradual onset, and this practice has produced little, if any, abuse or dependence despite use over thousands of years by natives of the Andes mountains. Beginning in the late nineteenth century, scientists isolated cocaine hydrochloride from coca leaves, and the extraction of pure cocaine became possible. Cocaine could be taken in higher doses by oral ingestion (gastrointestinal absorption) or by absorption through the nasal mucosa, producing higher cocaine levels in the blood and a more rapid onset of stimulation. Subsequently, it was found that a solution of cocaine hydrochloride could be administered via the intravenous route, producing the most rapid rise in blood levels and the fastest onset of stimulatory effects. Each newly available cocaine preparation that provided greater speed of onset and an increment in blood level was paralleled by a greater likelihood of producing addiction. In the 1980s, the availability of cocaine to the American public was increased further with the invention of crack cocaine. Crack, sold at a very low street price ($1 to $3 per dose), is alkaloidal cocaine (free base) that can be readily vaporized by heating. Simply inhaling the vapors produces blood levels comparable to those resulting from intravenous cocaine due to the large surface area for absorption into the pulmonary circulation following inhalation. The cocaine-containing blood then enters the left side of the heart and reaches the cerebral circulation without dilution by the systemic circulation. Inhalation of crack cocaine is thus much more likely to produce addiction than is chewing, drinking, or sniffing cocaine. This method, which rapidly delivers the drug to the brain, also is the preferred route for users of nicotine and cannabis. 
Although the drug variables are important, they do not fully explain the development of abuse and addiction. Most people who experiment with drugs that have a high risk of producing addiction (addiction liability) do not intensify their drug use and lose control. The risk for developing addiction among those who try nicotine is about twice that for those who try cocaine (Table 24-2),
but this does not imply that the pharmacological addiction liability of nicotine is twice that of cocaine. Rather, there are other variables listed in the categories of host factors and environmental conditions that influence the development of addiction. Host (User) Variables In general, effects of drugs vary among individuals. Even blood levels show wide variation when the same dose of a drug on a milligram-per-kilogram basis is given to different people. Polymorphism of the genes that encode enzymes involved in absorption, metabolism, and excretion and in receptor-mediated responses may contribute to the different degrees of reinforcement or euphoria observed among individuals. Children of alcoholics show an increased likelihood of developing alcoholism, even when adopted at birth and raised by nonalcoholic parents (Schuckit, 1999). The studies of genetic influences in this disorder show only an increased risk for developing alcoholism, not 100% determinism, and this is consistent with a polygenic disorder that has multiple determinants. Even identical twins, who share the same genetic endowment, do not have 100% concordance when one twin is alcoholic. However, the concordance rate for identical twins is much higher than that for fraternal twins. Also of interest is the observation that alcohol and other drug abuse tend to run in the same families, giving rise to postulates that common mechanisms may be involved. Innate tolerance to alcohol may represent a biological trait that contributes to the development of alcoholism. Data from a longitudinal study (Schuckit and Smith, 1996) show that sons of alcoholics have reduced sensitivity to alcohol when compared to other young men of the same age (22 years old) and with similar drinking histories. Sensitivity to alcohol was assessed in the laboratory by measuring the effects of two different doses of alcohol on motor performance and subjective feelings of intoxication. 
When the men were reexamined 10 years later, those who had been most tolerant (insensitive) to alcohol at age 22 were the most likely to be diagnosed as alcohol-dependent at age 32. The presence of tolerance predicted the development of alcoholism even in the group without a family history of alcoholism, but there were far fewer tolerant men in the group with a negative family history. Differences in alcohol metabolism also may influence the propensity for alcohol abuse. Ethanol is metabolized by alcohol dehydrogenase with the production of acetaldehyde, which is then metabolized by a mitochondrial aldehyde dehydrogenase known as ALDH2. A common mutation occurs in the gene for ALDH2, resulting in a less effective aldehyde dehydrogenase. This allele has a high frequency in Asian populations and results in an excess production of acetaldehyde after the ingestion of alcohol. Those who are heterozygous for this allele experience a very unpleasant facial flushing reaction 5 to 10 minutes after ingesting alcohol; the reaction is even more severe in individuals homozygous for the allele, and this genotype has not been found in alcoholics (Higuchi et al., 1996). Similarly, individuals who inherit the gene for impaired nicotine metabolism have been found to have a lower probability of becoming nicotine-dependent (Pianezza et al., 1998). Psychiatric disorders constitute another category of host variables. Drugs may produce immediate, subjective effects that relieve preexisting symptoms. People with anxiety, depression, insomnia, or even subtle symptoms such as shyness may find, on experimentation or by accident, that certain drugs give them relief. However, the apparent beneficial effects are transient, and repeated use of the drug may lead to tolerance and eventually compulsive, uncontrolled drug use. While psychiatric symptoms commonly are seen in drug abusers presenting for treatment, most of these symptoms started after the person began abusing drugs. 
Thus, drugs of abuse appear to produce more
psychiatric symptoms than they relieve. Environmental Variables Initiating and continuing illicit drug use appear to be significantly influenced by societal norms and peer pressure. Taking drugs may be seen initially as a form of rebellion against authority. In some communities, drug users and drug dealers are role models who seem to be successful and respected; thus, young people emulate them. There also may be a paucity of other options for pleasure or diversion. These factors are particularly important in communities where educational levels are low and job opportunities scarce. Pharmacological Phenomena Tolerance While abuse and addiction are extremely complicated conditions combining the many variables outlined above, there are a number of relevant pharmacological phenomena that occur independently of social and psychological dimensions. First are the changes in the way the body responds to a drug with repeated use. Tolerance is the most common response to repetitive use of the same drug and can be defined as the reduction in response to the drug after repeated administrations. Figure 24-1 shows an idealized dose-response curve for an administered drug. As the dose of the drug increases, the observed effect of the drug increases. With repeated use of the drug, however, the curve shifts to the right (tolerance). Thus a higher dose is required to produce the same effect that was once obtained at a lower dose. Diazepam, for example, typically produces sedation at doses of 5 to 10 mg in a first-time user, but those who repeatedly use it to produce a kind of "high" may become tolerant to doses of several hundreds of milligrams; some abusers have had documented tolerance to more than 1000 mg/day. As outlined in Table 24-3, there are many forms of tolerance, likely arising via multiple mechanisms. Figure 24-1. Shifts in a Dose-Response Curve with Tolerance and Sensitization. 
With tolerance, there is a shift of the curve to the right such that doses higher than initial doses are required to achieve the same effects. With sensitization, there is a leftward shift of the dose-response curve such that, for a given dose, there is a greater effect than seen after the initial dose.
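These shifts can be made concrete with a standard sigmoid Emax (Hill) model of the dose-response curve. The sketch below is purely illustrative: the doses, EC50 values, and shift magnitudes are hypothetical numbers, not data for any particular drug.

```python
def response(dose, ec50, emax=100.0, hill=1.0):
    """Sigmoid Emax (Hill) model: effect as a function of dose."""
    return emax * dose**hill / (ec50**hill + dose**hill)

initial_ec50 = 10.0     # hypothetical dose units
tolerant_ec50 = 40.0    # rightward shift: tolerance
sensitized_ec50 = 2.5   # leftward shift: sensitization

dose = 10.0
print(response(dose, initial_ec50))     # 50.0 (half-maximal effect)
print(response(dose, tolerant_ec50))    # 20.0 -- same dose, smaller effect
print(response(dose, sensitized_ec50))  # 80.0 -- same dose, larger effect
```

In this toy model, tolerance is represented simply as an increased EC50 and sensitization as a decreased EC50; real tolerance can also involve changes in the maximal effect (Emax) and the slope of the curve.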

Tolerance develops to some drug effects much more rapidly than to other effects of the same drug. For example, tolerance develops rapidly to the euphoria produced by opioids such as heroin, and addicts tend to increase their dose in order to reexperience that elusive "high." In contrast, tolerance to the gastrointestinal effects of opiates develops more slowly. The discrepancy between tolerance to euphorigenic effects and tolerance to effects on vital functions, such as respiration and blood
pressure, can lead to potentially fatal accidents in sedative abusers. Innate tolerance refers to genetically determined sensitivity (or lack of sensitivity) to a drug that is observed the first time that the drug is administered. Innate tolerance is discussed above as a host variable that influences the development of abuse or addiction. Acquired tolerance can be divided into three types: pharmacokinetic, pharmacodynamic, and learned tolerance, including a form of behavioral tolerance referred to as conditioned tolerance. Pharmacokinetic or dispositional tolerance refers to changes in the distribution or metabolism of the drug after repeated drug administration, such that reduced concentrations are present in the blood and subsequently at the sites of drug action (see Chapter 1: Pharmacokinetics: The Dynamics of Drug Absorption, Distribution, and Elimination). The most common mechanism is an increase in the rate of metabolism of the drug. For example, barbiturates stimulate the production of higher levels of hepatic microsomal enzymes, causing barbiturates to be broken down and cleared from the circulation more rapidly. Since the same enzymes metabolize many other drugs, they too are metabolized more quickly. This results in a decrease in their plasma levels as well and thus a reduction in their effects. Pharmacodynamic tolerance refers to adaptive changes that have taken place within systems affected by the drug, so that response to a given concentration of the drug is reduced. Examples include drug-induced changes in receptor density or efficiency of receptor coupling to signal transduction pathways (see Chapter 2: Pharmacodynamics: Mechanisms of Drug Action and the Relationship Between Drug Concentration and Effect). Learned tolerance refers to a reduction in the effects of a drug due to compensatory mechanisms that are learned. One type of learned tolerance is called behavioral tolerance. 
This simply describes the skills that can be developed through repeated experiences with attempting to function despite a state of mild to moderate intoxication. A common example is learning to walk a straight line in spite of the motor impairment produced by alcohol intoxication. This probably involves both acquisition of motor skills and the learned awareness of one's deficit, causing the person to walk more carefully. At higher levels of intoxication, behavioral tolerance is overcome, and the deficits are obvious. A special case of behavioral tolerance is referred to as conditioned tolerance. Conditioned tolerance (situation-specific tolerance) is a learning mechanism that develops when environmental cues such as sights, smells, or situations consistently are paired with the administration of a drug. When a drug affects homeostatic balance by producing sedation and changes in blood pressure, pulse rate, gut activity, etc., there is usually a reflexive counteraction or adaptation that attempts to maintain the status quo. If a drug always is taken in the presence of specific environmental cues (smell of drug preparation, sight of syringe), these cues begin to predict the appearance of the drug. Then the adaptations begin to occur even before the drug reaches its sites of action. If the drug always is preceded by the same cues, the adaptive response to the drug will be learned, and this will prevent the full manifestation of the drug's effects (tolerance). This mechanism of conditioned tolerance production follows classical (Pavlovian) principles of learning and results in drug tolerance being evident under circumstances where the drug is "expected." When the drug is received under novel or "unexpected" circumstances, tolerance is reduced and drug effects are enhanced (Wikler, 1973; Siegel, 1976). The term acute tolerance refers to rapid tolerance developing with repeated use on a single occasion such as in a "binge." 
For example, cocaine often is used in a binge, with repeated doses over one to
several hours, sometimes longer. Under binge dosing, there will be a decrease in response to subsequent doses of cocaine during the binge. This is the opposite of sensitization, observed with an intermittent dosing schedule, described below. Sensitization With stimulants such as cocaine or amphetamine, reverse tolerance or sensitization can occur. This refers to an increase in response with repetition of the same dose of the drug. Sensitization results in a shift to the left of the dose-response curve, as illustrated schematically in Figure 24-1. For example, with repeated daily administration to rats of a dose of cocaine that produces increased motor activity, the effect increases over several days, even though the dose remains constant. A conditioned response also can be a part of sensitization to cocaine. Simply putting a rat into a cage where cocaine is expected or giving a placebo injection after several days of receiving cocaine under the same circumstances produces an increase in motor activity as though cocaine actually were given, i.e., a conditioned response. Sensitization, in contrast to acute tolerance during a binge, requires a longer interval between doses, usually about a day. Sensitization has been studied in rats equipped with microdialysis cannulae for monitoring extracellular dopamine (Kalivas and Duffy, 1990; see Figure 24-2). The initial response to 10 mg/kg of cocaine administered intraperitoneally is an increase in measured dopamine levels. After seven daily injections, the dopamine increase is significantly greater than on the first day, and the behavioral response also is greater. Figure 24-2 also provides an example of a conditioned response (learned drug effect), since injection of saline produced both an increase in dopamine levels and an increase in behavioral activity when it was administered 3 days after cocaine injections had stopped. 
Little research on sensitization has been conducted in human subjects, but the results suggest that the phenomenon can occur. It has been postulated that stimulant psychosis results from a sensitized response after long periods of use. Figure 24-2. Changes in Dopamine Detected in the Extracellular Fluid of the Nucleus Accumbens of Rats after Daily Intraperitoneal Cocaine Injections (10 mg/kg). The first injection produces a modest increase and the last, after 7 days, produces a much greater increase in dopamine release. Note that whereas the first saline injection produces no effect on dopamine levels, the second, given 3 days after 7 days of cocaine injections, produces a significant rise in dopamine, presumably due to conditioning. (Adapted from Kalivas and Duffy, 1990, with permission.)

Cross-Tolerance Cross-tolerance refers to the fact that repeated use of drugs in a given category confers tolerance not only to the drug being used but also to other drugs in the same structural and mechanistic category. Understanding cross-tolerance is important in the medical management of persons dependent on any drug. Detoxification is a form of treatment for drug dependence that involves giving gradually decreasing doses of the drug to prevent withdrawal symptoms, thereby weaning the patient from the drug of dependence (see below). Detoxification can be accomplished with any medication that produces cross-tolerance to the initial drug of dependence. For example, users of heroin also are tolerant to other opioids. Thus the detoxification of heroin-dependent patients can be accomplished with any medication that activates opiate receptors (opioid drug; see Chapter 23: Opioid Analgesics). Physical Dependence Physical dependence is a state that develops as a result of the adaptation (tolerance) produced by a resetting of homeostatic mechanisms in response to repeated drug use. Drugs can affect numerous systems that previously were in equilibrium; these systems must find a new balance in the presence of inhibition or stimulation by a specific drug. A person in this adapted or physically dependent state requires continued administration of the drug to maintain normal function. If administration of the drug is stopped abruptly, there is another imbalance, and the affected systems must again go through a process of readjusting to a new equilibrium without the drug. Withdrawal Syndrome The appearance of a withdrawal syndrome when administration of the drug is terminated is the only actual evidence of physical dependence. Withdrawal signs and symptoms occur when drug administration in a physically dependent person is abruptly terminated. 
Withdrawal symptoms have at least two origins: (1) removal of the drug of dependence, and (2) central nervous system hyperarousal due to readaptation to the absence of the drug of dependence. Pharmacokinetic variables are of considerable importance in the amplitude and duration of the withdrawal syndrome. Withdrawal symptoms are characteristic for a given category of drugs, and they tend to be opposite to the original effects produced by the drug before tolerance developed. Thus, a drug (such as an opioid agonist) that produces miotic (constricted) pupils and slow heart rate will result in dilated pupils and tachycardia when it is withdrawn from a dependent person. Tolerance, physical dependence, and withdrawal are all biological phenomena. They are the natural consequences of drug use. They can be produced in experimental animals and in any human being who takes certain medications repeatedly. These symptoms in themselves do not imply that the individual is involved in abuse or addiction. Patients who take medicine for appropriate medical indications and in the correct dose still may show tolerance, physical dependence, and withdrawal symptoms if the drug is stopped abruptly rather than gradually. For example, a hypertensive patient receiving a β-adrenergic receptor blocker such as propranolol may have a good therapeutic response but, if the drug is stopped abruptly, may experience a withdrawal syndrome consisting of rebound increased blood pressure temporarily higher than that prior to beginning the medication. "Medical addict" is a term used to describe a patient in treatment for a medical disorder who has become "addicted" to the available prescribed drugs; the patient begins taking them in excessive doses, out of control. An example would be a patient with chronic pain, anxiety, or insomnia who begins using the prescribed medication more often than directed by the physician. If the physician
restricts the prescriptions, the patient may begin seeing several doctors without the knowledge of the primary physician. Such patients also may visit emergency rooms for the purpose of obtaining additional medication. This scenario rarely occurs, considering the large number of patients who receive medications capable of producing tolerance and physical dependence. Fear of producing such medical addicts results in needless suffering among patients with pain, as physicians needlessly limit appropriate medications. Tolerance and physical dependence are inevitable consequences of chronic treatment with opioids and certain other drugs, but tolerance and physical dependence, by themselves, do not imply "addiction." Clinical Issues The treatment of physically dependent individuals is discussed below with reference to the specific drug of abuse and dependence problems characteristic of each category: central nervous system (CNS) depressants, including alcohol and sedatives; nicotine and tobacco; opioids; psychostimulants, such as amphetamine and cocaine; cannabinoids; psychedelic drugs; and inhalants (volatile solvents, nitrous oxide, ethyl ether). Abuse of combinations of drugs across these categories is common. Alcohol is such a widely available drug that it is combined with practically all other categories. Some combinations reportedly are taken because of their interactive effects. An example is the combination of heroin and cocaine ("speedball"), which is described with the opioid category. When confronted with a patient exhibiting signs of overdose or withdrawal, the physician must be aware of these possible combinations, because each drug may require specific treatment. Central Nervous System Depressants Ethanol The use of ethyl alcohol prepared from the fermentation of sugars, starches, or other carbohydrates dates back as early as recorded history. Experimentation with ethanol is almost universal, and a high proportion of users find the experience pleasant. 
Approximately 70% of American adults occasionally consume ethanol (commonly called alcohol), and the lifetime prevalence of alcohol abuse and alcohol addiction (alcoholism) in this society is 5% to 10% for men and 3% to 5% for women. Ethanol is classed as a depressant because it indeed produces sedation and sleep. However, the initial effects of alcohol, particularly at lower doses, often are perceived as stimulation due to a suppression of inhibitory systems (see Chapter 18: Ethanol). Those who perceive only sedation from alcohol tend to choose not to drink when evaluated in a test procedure (de Wit et al., 1989). Alcohol impairs recent memory and, in high doses, produces the phenomenon of "blackouts," after which the drinker has no memory of his or her behavior while intoxicated. The effects of alcohol on memory are unclear (Mello, 1973), but evidence suggests that reports from patients about their reasons for drinking and their behavior during a binge are not reliable. Alcohol-dependent persons often say that they drink to relieve anxiety or depression. When allowed to drink under observation, however, alcoholics typically become more dysphoric as drinking continues (Mendelson and Mello, 1979), thus contradicting the tension-reduction explanation. Tolerance, Physical Dependence, and Withdrawal Mild intoxication by alcohol is familiar to almost everyone, but the symptoms vary among individuals. Some simply experience motor incoordination and sleepiness. Others initially become stimulated and garrulous. As the blood level increases, the sedating effects increase, with eventual
coma and death at high alcohol levels. The initial sensitivity (innate tolerance) to alcohol varies greatly among individuals and is related to family history of alcoholism (Schuckit and Smith, 1997). Experience with alcohol can produce greater tolerance (acquired tolerance), such that extremely high blood levels (300 to 400 mg/dl) can be found in alcoholics who do not appear grossly sedated. In these cases, the lethal dose does not increase proportionately to the sedating dose, and thus the margin of safety (therapeutic index) is decreased. Heavy consumers of alcohol not only acquire tolerance but also inevitably develop a state of physical dependence. This often leads to drinking in the morning to restore blood alcohol levels diminished during the night. Eventually they may awaken during the night and take a drink to avoid the restlessness produced by falling alcohol levels. The alcohol withdrawal syndrome (Table 24-4) generally depends on the size of the average daily dose and usually is "treated" by resumption of alcohol ingestion. Withdrawal symptoms are experienced frequently, but they usually are not severe or life threatening until they occur in conjunction with other problems, such as infection, trauma, malnutrition, or electrolyte imbalance. In the setting of such complications, the syndrome of delirium tremens becomes likely (see Table 24-4). Alcohol produces cross-tolerance to other sedatives such as benzodiazepines. This tolerance is operative in abstinent alcoholics, but while the alcoholic is drinking, the sedating effects of alcohol add to those of other drugs, making the combination more dangerous. This is particularly true for benzodiazepines, which are relatively safe in overdose when given alone but potentially are lethal in combination with alcohol. 
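The narrowing of the margin of safety can be made concrete with a small worked example. The therapeutic index is the ratio of the lethal (or toxic) dose to the effective (here, sedating) dose; when acquired tolerance raises the sedating dose much faster than the lethal dose, that ratio shrinks. The numbers below are purely illustrative, not clinical values:

```python
# Illustrative sketch only: the doses are hypothetical numbers chosen
# to show the arithmetic, not pharmacological data.

def therapeutic_index(lethal_dose, effective_dose):
    """Margin of safety as the ratio of lethal to effective dose."""
    return lethal_dose / effective_dose

# A naive drinker: sedation occurs at a low dose, lethality far above it.
naive = therapeutic_index(lethal_dose=500, effective_dose=100)     # 5.0

# A tolerant drinker: the sedating dose has tripled with acquired
# tolerance, but the lethal dose has risen only modestly.
tolerant = therapeutic_index(lethal_dose=600, effective_dose=300)  # 2.0

print(naive, tolerant)  # the margin of safety has narrowed
```

The point of the sketch is the ratio, not the absolute numbers: a tolerant user who needs a large dose to feel an effect operates much closer to a lethal level than a naive user does.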
The chronic use of alcohol as well as that of other sedatives is associated with the development of depression (McLellan et al., 1979), and the risk of suicide among alcoholics is one of the highest of any diagnostic category. Cognitive deficits have been reported in alcoholics tested while sober. These deficits usually improve after weeks to months of abstinence (Grant, 1987). More severe recent memory impairment is associated with specific brain damage caused by nutritional deficiencies, which are common in alcoholics. Alcohol is toxic to many organ systems. As a result, the medical complications of alcohol abuse and dependence include liver disease, cardiovascular disease, endocrine and gastrointestinal effects, and malnutrition, in addition to the CNS dysfunctions outlined above (see Chapter 18: Ethanol). Ethanol readily crosses the placental barrier, producing the fetal alcohol syndrome, a major cause of mental retardation (see Chapter 18: Ethanol). Pharmacological Interventions Detoxification A patient who presents in a medical setting with an alcohol-withdrawal syndrome should be considered to have a potentially lethal condition. Although most mild cases of alcohol withdrawal never come to medical attention, severe cases require general evaluation; attention to hydration and electrolytes; vitamins, especially high-dose thiamine; and a sedating medication that has cross-tolerance with alcohol. A short-acting benzodiazepine such as oxazepam can be given at doses sufficient to block or diminish the symptoms described in Table 24-4; some authorities recommend a long-acting benzodiazepine unless there is demonstrated liver impairment. Anticonvulsants such as carbamazepine have been shown to be effective in alcohol withdrawal, although they appear not to relieve subjective symptoms as well as benzodiazepines. After medical evaluation, uncomplicated alcohol withdrawal can be treated effectively on an outpatient basis (Hayashida et
al., 1989). When there are medical problems or a history of seizures, hospitalization is required. Other Measures Detoxification is only the first step of treatment. Complete abstinence is the objective of long-term treatment, and this is accomplished mainly by behavioral approaches. Medications that aid in this process are being sought. Disulfiram (see Chapter 18: Ethanol) has been useful in some programs that focus behavioral efforts on the ingestion of the medication. Disulfiram blocks the metabolism of alcohol, resulting in the accumulation of acetaldehyde, which produces an unpleasant flushing reaction when alcohol is ingested. Knowledge of this unpleasant reaction helps the patient resist taking a drink. Although quite effective pharmacologically, disulfiram has not been found to be effective in controlled clinical trials, because so many patients failed to ingest the medication. Another FDA-approved medication used as an adjunct in the treatment of alcoholism is naltrexone (see Chapter 18: Ethanol). This opiate receptor antagonist appears to block some of the reinforcing properties of alcohol and has resulted in a decreased rate of relapse to alcohol drinking in several double-blind clinical trials. It works best in combination with behavioral treatment programs that encourage adherence to medication and to remaining abstinent from alcohol. Benzodiazepines and Other Nonalcohol Sedatives Benzodiazepines are among the most commonly prescribed drugs worldwide; they are used mainly for the treatment of anxiety disorders and insomnia (Chapters 17: Hypnotics and Sedatives and 19: Drugs and the Treatment of Psychiatric Disorders: Depression and Anxiety Disorders). Considering their widespread use, intentional abuse of prescription benzodiazepines is relatively rare. When a benzodiazepine is taken for up to several weeks, there is little tolerance and no difficulty in stopping the medication when the condition no longer warrants its use. 
After several months, the proportion of patients who become tolerant increases, and reducing the dose or stopping the medication produces withdrawal symptoms (Table 24-5). It can be difficult to distinguish withdrawal symptoms from the reappearance of the anxiety symptoms that caused the benzodiazepine to be prescribed initially. Some patients may increase their dose over time, because tolerance definitely develops to the sedative effects. Many patients and their physicians, however, contend that antianxiety benefits continue to occur long after tolerance to the sedating effects. Moreover, these patients continue to take the medication for years according to medical directions without increasing their dose and are able to function very effectively as long as they take the benzodiazepine. The degree to which tolerance develops to the anxiolytic effects of benzodiazepines is a subject of controversy (Lader and File, 1987). There is, however, good evidence that significant tolerance does not develop to all benzodiazepine actions, because some effects of acute doses on memory persist in patients who have taken benzodiazepines for years (Lucki et al., 1986). The American Psychiatric Association formed a task force that reviewed the issues and published guidelines on the proper medical use of benzodiazepines (American Psychiatric Association, 1990). Intermittent use when symptoms occur retards the development of tolerance and is, therefore, preferable to daily use. Patients with a history of alcohol or other drug abuse problems have an increased risk for the development of benzodiazepine abuse and should rarely, if ever, be treated with benzodiazepines on a chronic basis. While relatively few patients who receive benzodiazepines for medical indications abuse their medication, there are individuals who specifically seek benzodiazepines for their ability to produce a "high." 
Among these abusers, there are differences in drug popularity, with those benzodiazepines that have a rapid onset, such as diazepam and alprazolam, tending to be the most desirable. The drugs may be obtained by simulating a medical condition and deceiving physicians or simply through illicit channels. Street drug dealers provide benzodiazepines in most major cities at a
relatively low cost. Such unsupervised use can lead to the self-administration of huge quantities of such drugs and therefore tolerance to the benzodiazepine's sedating effects. For example, while 5 to 20 mg/day of diazepam is a typical dose for a patient receiving prescribed medication, abusers may take over 1000 mg/day and not appear grossly sedated. Abusers may combine benzodiazepines with other drugs to increase the effect. For example, it is part of the "street lore" that taking diazepam 30 minutes after an oral dose of methadone will produce an augmented high that is not obtainable with either drug alone. While there is some illicit use of benzodiazepines as a primary drug of abuse, most of the nonsupervised use seems to be by abusers of other drugs who are attempting to self-medicate the side effects or withdrawal effects of their primary drug of abuse. Thus, cocaine addicts often take diazepam to relieve the irritability and agitation produced by cocaine binges, and opioid addicts find that diazepam and other benzodiazepines relieve some of the anxiety symptoms of opioid withdrawal when they are unable to obtain their preferred drug. Pharmacological Interventions If patients receiving long-term benzodiazepine treatment by prescription wish to stop their medication, the process may take months of gradual dose reduction. Symptoms as listed in Table 245 may occur during this outpatient detoxification, but in most cases the symptoms are mild. If anxiety symptoms return, a nonbenzodiazepine such as buspirone may be prescribed, but this agent usually is less effective than benzodiazepines for treatment of anxiety in these patients. Some authorities recommend transferring the patient to a long-half-life benzodiazepine during detoxification; other medications recommended include the anticonvulsants carbamazepine and phenobarbital. Controlled studies comparing different treatment regimens are lacking. 
Since patients who have been on low doses of benzodiazepines for years usually have no adverse effects, the physician and patient should jointly decide whether detoxification and possible transfer to a new anxiolytic is worth the effort. The specific benzodiazepine receptor antagonist flumazenil has been found useful in the treatment of overdose and in reversing the effects of long-acting benzodiazepines used in anesthesia (see Chapter 17: Hypnotics and Sedatives). It has been tried in the treatment of persistent withdrawal symptoms after cessation of long-term benzodiazepine treatment. Deliberate abusers of high doses of benzodiazepines usually require inpatient detoxification. Frequently, benzodiazepine abuse is part of a combined dependence involving alcohol, opioids, and cocaine. Detoxification can be a complex clinical pharmacological problem, requiring knowledge of the pharmacokinetics of each drug. The patient's history may not be reliable, not simply because of lying but also because the patient frequently does not know the true identity of drugs purchased on the street. Medication for detoxification should not be prescribed by the "cookbook" approach but by careful titration and patient observation. The withdrawal syndrome from diazepam, for example, may not become evident until the patient develops a seizure in the second week of hospitalization. One approach to complex detoxification is to focus on the CNS-depressant drug and temporarily hold the opioid component constant with a low dose of methadone. Opioid detoxification can begin later. A long-acting benzodiazepine, such as diazepam or clorazepate, or a long-acting barbiturate, such as phenobarbital, can be used to block the sedative withdrawal symptoms. The phenobarbital dose should be determined by a series of test doses and subsequent observations to determine the level of tolerance. 
Most complex detoxifications can be accomplished using this phenobarbital loading-dose strategy (see Robinson et al., 1981).

After detoxification, the prevention of relapse requires a long-term outpatient rehabilitation program similar to that for the treatment of alcoholism. No specific medications have been found to be useful in the rehabilitation of sedative abusers, but, of course, specific psychiatric disorders such as depression or schizophrenia, if present, require appropriate medications. Barbiturates and Nonbenzodiazepine Sedatives The use of barbiturates and other nonbenzodiazepine sedating medications has declined greatly in recent years due to the increased safety and efficacy of the newer medications. Abuse problems with barbiturates resemble those seen with benzodiazepines in many ways. Treatment of abuse and addiction should be handled similarly to interventions for the abuse of alcohol and benzodiazepines. Because drugs in this category frequently are prescribed as hypnotics for patients complaining of insomnia, the physician should be aware of the problems that can develop when the hypnotic agent is withdrawn. Insomnia rarely should be treated with medication as a primary disorder except when produced by short-term stressful situations. Insomnia often is a symptom of an underlying chronic problem, such as depression, or may be due simply to a change in sleep requirements with age. Prescription of sedative medications, however, can change the physiology of sleep, with subsequent tolerance to these medication effects. When the sedative is stopped, there is a rebound effect (Kales et al., 1979). This medication-induced insomnia requires detoxification by gradual dose reduction. Nicotine The basic pharmacology of nicotine is discussed in Chapter 9: Agents Acting at the Neuromuscular Junction and Autonomic Ganglia. Nicotine has complex effects that result in its self-administration. 
Because nicotine provides the reinforcement for the smoking of cigarettes, the most common cause of preventable death and disease in the United States, it is arguably the most dangerous dependence-producing drug. The dependence produced by nicotine can be extremely durable, as exemplified by the high failure rate among smokers who try to quit. Although more than 80% of smokers express a desire to quit, only 35% try to stop each year, and fewer than 5% are successful in unaided attempts to quit (American Psychiatric Association, 1994). Cigarette (nicotine) addiction is influenced by multiple variables. Nicotine itself produces reinforcement; users compare nicotine to stimulants such as cocaine or amphetamine, although its effects are of lower magnitude. While there are many casual users of alcohol and cocaine, few individuals who smoke cigarettes smoke a small enough quantity (five cigarettes or fewer per day) to avoid dependence. Nicotine is absorbed readily through the skin, mucous membranes, and, of course, the lungs. The pulmonary route produces discernible CNS effects in as little as seven seconds. Thus, each puff produces some discrete reinforcement. With 10 puffs per cigarette, the one-pack-per-day smoker reinforces the habit 200 times daily. The timing, setting, situation, and preparation all become associated repetitively with the effects of nicotine. Nicotine has both stimulant and depressant actions. The smoker feels alert, yet there is some muscle relaxation. Nicotine activates the nucleus accumbens reward system in the brain, discussed earlier; increased extracellular dopamine has been found in this region after nicotine injections in rats. Nicotine affects other systems as well, including the release of endogenous opioids and glucocorticoids. There is evidence for tolerance to the subjective effects of nicotine. Smokers typically report that the first cigarette of the day after a night of abstinence gives the "best" feeling. 
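The reinforcement arithmetic above is worth making explicit, since it underlies the unusual durability of the habit. Using the figures given in the text (10 puffs per cigarette, 20 cigarettes in a pack):

```python
# Back-of-envelope arithmetic using the figures stated in the text.
puffs_per_cigarette = 10
cigarettes_per_day = 20  # one pack

reinforcements_per_day = puffs_per_cigarette * cigarettes_per_day
print(reinforcements_per_day)  # 200 discrete reinforcements daily

# Over a year, the pairing of cues with nicotine's effect is repeated
# tens of thousands of times.
reinforcements_per_year = reinforcements_per_day * 365
print(reinforcements_per_year)  # 73000
```

No other commonly used drug is self-administered with anything close to this frequency, which helps explain why the cues of smoking become so tightly conditioned.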
Smokers who return to cigarettes after a period of abstinence may experience nausea if they return immediately to their
previous dose. Persons naive to the effects of nicotine will experience nausea at low nicotine blood levels, and smokers will experience nausea if nicotine levels are raised above their accustomed levels. Negative reinforcement refers to the benefits obtained from the termination of an unpleasant state. In dependent smokers, there is evidence that the urge to smoke correlates with a low nicotine blood level, as though smoking were a means to achieve a certain nicotine level and thus avoid withdrawal symptoms. Some smokers even awaken during the night to have a cigarette, which ameliorates the effect of low nicotine blood levels that could disrupt sleep. If the nicotine level is maintained artificially by a slow intravenous infusion, there is a decrease in the number of cigarettes smoked and in the number of puffs (Russell, 1987). Thus, smokers may be smoking to achieve the reward of nicotine effects, to avoid the pain of nicotine withdrawal or, most likely, a combination of the two. Nicotine withdrawal symptoms are listed in Table 24-6. Depressed mood (dysthymic disorder, affective disorder) is associated with nicotine dependence, but it is not known whether depression predisposes one to begin smoking or depression develops during the course of nicotine dependence. Depression significantly increases during smoking withdrawal, and this is cited as one reason for relapse. Pharmacological Interventions The nicotine withdrawal syndrome can be alleviated by nicotine replacement therapy, available without a prescription. Figure 24-3 shows the blood nicotine concentrations achieved by different methods of nicotine delivery. Because nicotine gum and a nicotine patch do not achieve the peak levels seen with cigarettes, they do not produce the same magnitude of subjective effects as nicotine. These methods do, however, suppress the symptoms of nicotine withdrawal. 
Thus, smokers should be able to transfer their dependence to the alternative delivery system and gradually reduce the daily nicotine dose with minimal symptoms. Although this results in more smokers achieving abstinence, most resume smoking over the ensuing weeks or months. Comparisons with placebo treatment show large benefits of nicotine replacement at six weeks, but the effect diminishes with time. The nicotine patch produces a steady blood level (Figure 24-3) and seems to have better patient compliance than that observed with nicotine gum. Verified abstinence rates at 12 months are reported to be in the range of 20%, which is worse than the success rate for any other addiction. The goal of complete abstinence rather than significant reduction is necessary for success; when ex-smokers "slip" and begin smoking a little, they usually relapse quickly to their prior level of dependence. Bupropion, an antidepressant (see Chapter 19: Drugs and the Treatment of Psychiatric Disorders: Depression and Anxiety Disorders), has been found to improve abstinence rates among smokers. Some smokers report that it reduces their craving for cigarettes, and controlled studies show reduced relapse in smokers randomized to this medication. The best results are in smokers receiving both nicotine patch and bupropion. Behavioral treatment in combination with medication is considered the treatment of choice. Figure 24-3. Nicotine Concentrations in Blood Resulting from Five Different Nicotine Delivery Systems. Shaded areas indicate the periods of exposure to nicotine. The arrows in the lower panel indicate when the nicotine patch was put on and taken off. (Adapted from Benowitz et al., 1988, and Srivastava et al., 1991, with permission.)

Opioids Opioid drugs are used primarily for the treatment of pain (see Chapter 23: Opioid Analgesics). Some of the CNS mechanisms that reduce the perception of pain also produce a state of well-being or euphoria. Thus, opioid drugs also are taken outside of medical channels for the purpose of obtaining the effects on mood. This potential for abuse has generated much research on separating the mechanism of analgesia from that of euphoria in the hope of eventually developing a potent analgesic that does not activate brain reward systems. Although this research has led to advances in understanding the physiology of pain, the standard medications for severe pain remain the derivatives of the opium poppy (opiates) and synthetic drugs that activate the same receptors (opioids). Drugs modeled after the endogenous opioid peptides may one day provide more specific treatment, but none of these currently is available for clinical use. Medications that do not act at opiate receptors, such as the nonsteroidal antiinflammatory drugs, have an important role in certain types of pain, especially chronic pain; but for acute pain and for severe chronic pain, the opioid drugs are the most effective. A recent development in pain control stems from a greater understanding of the mechanism of tolerance to "mu" (μ)-opioid receptor-mediated analgesia, which involves N-methyl-D-aspartate (NMDA) receptors (Trujillo and Akil, 1991). By combining morphine with dextromethorphan, an
NMDA receptor antagonist, tolerance is impaired and analgesia is enhanced without an increase in the dose of opioid. The subjective effects of opioid drugs are useful in the management of acute pain. This is particularly true in high-anxiety situations, such as the crushing chest pain of a myocardial infarction, when the relaxing, anxiolytic effects complement the analgesia. Normal volunteers with no pain given opioids in the laboratory may report the effects as unpleasant because of the side effects, such as nausea, vomiting, and sedation. Patients with pain rarely develop abuse or addiction problems. Of course, patients receiving opioids develop tolerance routinely, and if the medication is stopped abruptly, they will show the signs of an opioid withdrawal syndrome, the evidence for physical dependence. Opioids should never be withheld from patients with cancer out of fear of producing addiction. If chronic opioid medication is indicated, it is preferable to prescribe an orally active, slow-onset opioid with a long duration of action. These qualities reduce the likelihood of producing euphoria at onset and withdrawal symptoms as the medication wears off. Methadone is an excellent choice for the management of chronic severe pain. Controlled-release, oral morphine (MS CONTIN, others) or controlled-release oxycodone (OXYCONTIN) are other possibilities. Rapid-onset, short-duration opioids are excellent for acute, short-term use, such as during the postoperative period. As tolerance and physical dependence develop, however, the patient may experience the early symptoms of withdrawal between doses, and during withdrawal, the threshold for pain decreases. Thus, for chronic administration, the long-acting opioids are recommended. The major risk for abuse or addiction occurs in patients complaining of pain with no clear physical explanation or with evidence of a chronic disorder that is not life-threatening. 
Examples are chronic headaches, backaches, abdominal pain, or peripheral neuropathy. Even in these cases, an opioid might be considered as a brief emergency treatment, but long-term treatment with opioids should be used only after other alternatives have been exhausted. In those relatively rare patients who develop abuse, the transition from legitimate use to abuse often begins with patients returning to their physician earlier than scheduled to get a new prescription or visiting emergency rooms of different hospitals complaining of acute pain and asking for an opioid injection. Heroin is the most important opioid drug that is abused. There is no legal supply of heroin for clinical use in the United States. Some claim that heroin has unique analgesic properties for the treatment of severe pain, but double-blind trials have found it to be no more effective than hydromorphone. However, heroin is widely available on the illicit market, and its price dropped sharply in the 1990s while its purity increased tenfold. For many years, heroin purchased on the streets in the United States was highly diluted. Each 100-mg bag of powder had only about 4 mg of heroin (range 0 to 8 mg), and the rest was inert or sometimes toxic adulterants such as quinine. In the mid-1990s, street heroin reached 45% to 75% purity in many large cities, with some samples testing as high as 90%. This means that the level of physical dependence among heroin addicts is relatively high and that users who interrupt regular dosing will develop more severe withdrawal symptoms. Whereas heroin previously required intravenous injection, the more potent supplies can be smoked or administered nasally (snorted), thus making the initiation of heroin use accessible to people who would not insert a needle into their veins. 
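The change in street purity described above can be checked with simple arithmetic, using only the figures stated in the text (about 4 mg of heroin per 100-mg bag historically, versus 45% to 75% purity in the mid-1990s):

```python
# Arithmetic check of the purity figures quoted in the text.
old_purity = 4 / 100                      # ~4 mg heroin in a 100-mg bag
new_purity_low, new_purity_high = 0.45, 0.75

fold_increase_low = new_purity_low / old_purity    # ~11x
fold_increase_high = new_purity_high / old_purity  # ~19x

# Consistent with the "tenfold" increase described, at the low end of
# the mid-1990s purity range.
print(fold_increase_low, fold_increase_high)
```

Greater purity also means a single "bag" delivers a much larger absolute dose, which is one reason the level of physical dependence among users rose and why non-injection routes (smoking, snorting) became practical.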
There is no accurate way to count the number of heroin addicts, but based on extrapolation from overdose deaths, number of applicants for treatment, and number of heroin addicts arrested, the estimates range from 800,000 to 1 million. In national surveys, approximately three adults report having tried heroin for every one who became addicted to the drug.

Tolerance, Dependence, and Withdrawal Injection of a heroin solution produces a variety of sensations described as warmth, taste, or high and intense pleasure ("rush") often compared to sexual orgasm. There are some differences among the opioids in their acute effects, with morphine producing more of a histamine-releasing effect and meperidine producing more excitation or confusion. Even experienced opioid addicts, however, cannot distinguish between heroin and hydromorphone in double-blind tests. Thus, the popularity of heroin may be due to its availability on the illicit market and its rapid onset. After intravenous injection, the effects begin in less than a minute. Heroin has high lipid solubility, crosses the blood-brain barrier quickly, and is deacetylated to the active metabolites 6-monoacetylmorphine and morphine. After the intense euphoria, which lasts from 45 seconds to several minutes, there is a period of sedation and tranquillity ("on the nod") lasting up to an hour. The effects of heroin wear off in 3 to 5 hours, depending on the dose. Experienced users may inject two to four times per day. Thus, the heroin addict is constantly oscillating between being "high" and feeling the sickness of early withdrawal (Figure 24-4). This produces many problems in the homeostatic systems regulated, at least in part, by endogenous opioids. For example, the hypothalamic-pituitary-gonadal axis and the hypothalamic-pituitary-adrenal axis are abnormal in heroin addicts. Women on heroin have irregular menses, and men have a variety of sexual performance problems. Mood also is affected. Heroin addicts are relatively docile and compliant after taking heroin, but during withdrawal, they become irritable and aggressive. Figure 24-4. Differences in Responses to Heroin and Methadone. A person who injects heroin several times per day oscillates between being sick and being high. 
In contrast, the typical methadone patient remains in the "normal" range (indicated in gray) with little fluctuation after dosing once per day. The curves represent the subject's mental and physical state and not plasma levels of the drug.

Based on patient reports, tolerance develops early to the euphoria-producing effects of opioids. There also is tolerance to the respiratory depressant, analgesic, sedative, and emetic properties. Heroin users tend to increase their daily dose, depending on their financial resources and the availability of the drug. If a supply is available, the dose can be progressively increased 100-fold.

Even in highly tolerant individuals, the possibility of overdose remains if tolerance is exceeded. Overdose is likely to occur when potency of the street sample is unexpectedly high or when the heroin is mixed with a far more potent opioid, such as fentanyl, synthesized in clandestine laboratories. Addiction to heroin or other short-acting opioids produces behavioral disruptions and usually becomes incompatible with a productive life. There is a significant risk for opioid abuse and dependence among physicians and other health-care workers who have access to potent opioids, thus enabling unsupervised experimentation. Physicians often begin by assuming that they can manage their own dose, and they may rationalize their behavior based on the beneficial effects of the drug. Over time, however, the typical unsupervised opioid user loses control, and behavioral changes are observed by family and coworkers. Apart from the behavioral changes and the risk of overdose, especially with very potent opioids, chronic use of opioids is relatively nontoxic. Opioids frequently are used in combinations with other drugs. A common combination is heroin and cocaine ("speedball"). Users report an improved euphoria because of the combination, and there is evidence of an interaction, because the partial opioid agonist buprenorphine reduces cocaine self-administration in animals (Mello et al., 1989). Cocaine reduces the signs of opioid withdrawal (Kosten, 1990), and heroin may reduce the irritability seen in chronic cocaine users. The mortality rate for street heroin users is very high. Early death comes from involvement in crime to support the habit; from uncertainty about the dose, the purity, and even the identity of what is purchased on the street; and from serious infections associated with unsterile drugs and sharing of injection paraphernalia. 
Heroin users commonly acquire bacterial infections producing skin abscesses, endocarditis, pulmonary infections, especially tuberculosis, and viral infections producing hepatitis and acquired immunodeficiency syndrome (AIDS). As with other addictions, the first stage of treatment addresses physical dependence and consists of detoxification. The opioid withdrawal syndrome (Table 24-7) is very unpleasant but not life-threatening. It begins within 6 to 12 hours after the last dose of a short-acting opioid and as long as 72 to 84 hours after a very long-acting opioid medication. Heroin addicts go through early stages of this syndrome frequently when heroin is scarce or expensive. Some therapeutic communities, as a matter of policy, elect not to treat withdrawal so that the addict can experience the suffering while being given group support. The duration and intensity of the syndrome are related to the clearance of the individual drug. Heroin withdrawal is brief (5 to 10 days) and intense. Methadone withdrawal is slower in onset and lasts longer. Protracted withdrawal also is likely to be longer with methadone. (See more detailed discussions of protracted withdrawal under "Long-Term Management," below.) Pharmacological Interventions Opioid withdrawal signs and symptoms can be treated by three different approaches. The first and most commonly used depends on cross-tolerance and consists of transfer to a prescription opioid medication and then gradual dose reduction. The same principles of detoxification apply as for other types of physical dependence. It is convenient to change the patient from a short-acting opioid such as heroin to a long-acting one such as methadone. The initial dose of methadone is typically 20 to 30 mg. This is a test dose to determine the level needed to reduce observed withdrawal symptoms. The first day's total dose then can be calculated depending on the response and then reduced by 20% per day during the course of detoxification. 
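The arithmetic of such a taper (a 20% reduction of the total daily dose on each day of detoxification) can be sketched as follows. This is a hypothetical illustration of the calculation only, not dosing guidance; the 30-mg starting dose and the 1-mg stopping threshold are assumptions chosen for the example:

```python
def taper_schedule(first_day_total_mg: float,
                   daily_reduction: float = 0.20,
                   stop_below_mg: float = 1.0) -> list[float]:
    """Daily doses, each 20% lower than the day before.

    Illustrative arithmetic only; real detoxification is individualized
    and clinically supervised.
    """
    doses = []
    dose = first_day_total_mg
    while dose >= stop_below_mg:
        doses.append(round(dose, 1))
        dose *= (1.0 - daily_reduction)
    return doses

schedule = taper_schedule(30.0)
# Day 1: 30.0 mg, Day 2: 24.0 mg, Day 3: 19.2 mg, ...
```

Note that a fixed percentage reduction gives large absolute cuts early and progressively smaller ones later, which is why such tapers wind down smoothly rather than stepping off abruptly.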
A second approach to detoxification involves the use of clonidine, a medication approved only for the treatment of hypertension (see Chapter 33: Antihypertensive Agents and the Drug Therapy of
Hypertension). Clonidine is an α2-adrenergic agonist that decreases adrenergic neurotransmission from the locus ceruleus. Many of the autonomic symptoms of opioid withdrawal, such as nausea, vomiting, cramps, sweating, tachycardia, and hypertension, result from the loss of opioid suppression of the locus ceruleus system during the abstinence syndrome. Clonidine, acting via distinct receptors but by cellular mechanisms that mimic opioid effects, can alleviate many of the symptoms of opioid withdrawal. However, clonidine does not alleviate the generalized aches and opioid craving characteristic of opioid withdrawal. A similar drug, lofexidine (not yet available in the United States), is associated with less of the hypotension that limits the usefulness of clonidine in this setting. A third method of treating opioid withdrawal involves activation of the endogenous opioid system without medication. The techniques proposed include acupuncture and several methods of CNS activation utilizing transcutaneous electrical stimulation. While theoretically attractive, these approaches have not yet been found to be practical. Rapid antagonist-precipitated opioid detoxification under general anesthesia has received considerable publicity, because it promises detoxification in several hours while the patient is unconscious and thus not experiencing withdrawal discomfort. A mixture of medications has been used, and morbidity and mortality as reported in the lay press are unacceptable, with no demonstrated advantage in long-term outcome. Long-Term Management If patients are simply discharged from the hospital after withdrawal from opioids, there is a high probability of a quick return to compulsive opioid use. Addiction is a chronic disorder that requires long-term treatment. There are numerous factors that influence relapse. One factor is that the withdrawal syndrome does not end in 5 to 7 days. 
There are subtle signs and symptoms often called the protracted withdrawal syndrome (Table 24-7) that persist for up to 6 months. Physiological measures tend to oscillate as though a new set point were being established (Martin and Jasinski, 1969); during this phase, outpatient drug-free treatment has a low probability of success, even when the patient has received intensive prior treatment while protected from relapse in a residential program. The most successful treatment for heroin addiction consists of stabilization on methadone. Patients who repeatedly relapse during drug-free treatment can be transferred directly to methadone without requiring detoxification. The dose of methadone must be sufficient to prevent withdrawal symptoms for at least 24 hours. Levomethadyl acetate hydrochloride (ORLAAM) is another maintenance option that will block withdrawal for 72 hours. Agonist Maintenance Patients receiving methadone or levomethadyl acetate will not experience the ups and downs they experienced while on heroin (Figure 24-4). Drug craving diminishes and may disappear. Neuroendocrine rhythms eventually are restored (Kreek, 1992). Because of cross-tolerance (from methadone to heroin), patients who inject street heroin report a reduced effect from usual heroin doses. This cross-tolerance effect is dose-related, so that higher methadone maintenance doses result in less illicit opioid use as determined by random urine testing. Patients become tolerant to the sedating effects of methadone and become able to attend school or function in a job. Opioids also have a persistent, mild, stimulating effect noticeable after tolerance to the sedating effect, such that reaction time is quicker and vigilance is increased on a stable dose of methadone.

Antagonist Treatment Another pharmacological option is opioid antagonist treatment. Naltrexone (see Chapter 23: Opioid Analgesics) is an antagonist with a high affinity for the µ-opioid receptor; it will competitively block the effects of heroin or other µ-opioid-receptor agonists. Naltrexone has almost no agonist effects of its own and will not satisfy craving or relieve protracted withdrawal symptoms. For these reasons, naltrexone treatment does not appeal to the average heroin addict, but it can be utilized after detoxification for patients with high motivation to remain opioid-free. Physicians, nurses, and pharmacists with opioid addiction problems have frequent access to opioid drugs and make excellent candidates for this treatment approach. New Treatment Options Two important advances in the treatment of opioid addiction are currently in clinical trials. Buprenorphine, a partial agonist at µ-opioid receptors (see Chapter 23: Opioid Analgesics), produces minimal withdrawal symptoms, has a low potential for overdose, a long duration of action, and the ability to block heroin effects. In order to make treatment of opioid addiction more accessible, buprenorphine is proposed for use in physicians' offices rather than methadone programs. A depot formulation of naltrexone, which provides 30 days of medication after a single injection, is also in clinical trials. This formulation would eliminate the necessity of daily pill-taking and prevent relapse when the recently detoxified patient leaves a protected environment. Cocaine and Other Psychostimulants Cocaine More than 23 million Americans are estimated to have used cocaine at some time, but the number of current users declined from an estimated 8.6 million occasional users and 5.8 million regular users to 3.6 million who still identified themselves as sometimes using cocaine in 1995. The number of frequent users (at least weekly) has remained steady since 1991 at about 640,000 persons. 
Not all users become addicts, and the variables that influence this risk are discussed at the beginning of this chapter. A key factor is the widespread availability of relatively inexpensive cocaine in the alkaloidal (free base, "crack") form suitable for smoking and the hydrochloride powder form suitable for nasal or intravenous use. Drug abuse in men occurs about twice as frequently as in women. However, smoked cocaine use is particularly common in young women of childbearing age, who may use cocaine in this manner as commonly as do men. The reinforcing effects of cocaine and cocaine analogs correlate best with their effectiveness in blocking the transporter that recovers dopamine from the synapse. This leads to increased dopaminergic stimulation at critical brain sites (Ritz et al., 1987). However, cocaine also blocks both norepinephrine (NE) and serotonin (5-HT) reuptake, and chronic use of cocaine produces changes in these neurotransmitter systems as measured by reductions in the neurotransmitter metabolites MHPG (3-methoxy-4-hydroxyphenethyleneglycol) and 5-HIAA (5-hydroxyindoleacetic acid). The general pharmacology and legitimate use of cocaine are discussed in Chapter 15: Local Anesthetics. Cocaine produces a dose-dependent increase in heart rate and blood pressure accompanied by increased arousal, improved performance on tasks of vigilance and alertness, and a sense of self-confidence and well-being. Higher doses produce euphoria, which has a brief duration
and often is followed by a desire for more drug. Involuntary motor activity, stereotyped behavior, and paranoia may occur after repeated doses. Irritability and increased risk of violence are found among heavy chronic users. The half-life of cocaine in plasma is about 50 minutes, but inhalant (crack) users typically desire more cocaine after 10 to 30 minutes. Intranasal and intravenous uses also result in a "high" of shorter duration than would be predicted by plasma cocaine levels, suggesting that a declining plasma concentration is associated with termination of the high and resumption of cocaine seeking. This theory is supported by positron emission tomography imaging studies using 11C-labeled cocaine, which show that the time course of subjective euphoria parallels the uptake and displacement of the drug in the corpus striatum (Volkow et al., 1999). Addiction is the most common complication of cocaine use. Some users, especially intranasal users, can continue intermittent use for years. Others become compulsive users despite elaborate methods to maintain control. Stimulants tend to be used much more irregularly than opioids, nicotine, and alcohol. Binge use is very common, and a binge may last hours to days, terminating only when supplies of the drug are exhausted. The major route for cocaine metabolism involves hydrolysis of each of its two ester groups. Benzoylecgonine, produced upon loss of the methyl group, represents the major urinary metabolite and can be found in the urine for 2 to 5 days after a binge. As a result, benzoylecgonine tests are useful for detecting cocaine use; heavy users have been found to have detectable amounts of the metabolite in urine for up to 10 days following a binge. Cocaine frequently is used in combination with other drugs. The cocaine-heroin combination is discussed above, with opioids. Alcohol is another drug that cocaine users take to reduce the irritability experienced during heavy cocaine use. 
Some develop alcohol addiction in addition to their cocaine problem. An important metabolic interaction occurs when cocaine and alcohol are taken concurrently. Some cocaine is transesterified to cocaethylene, which is equipotent to cocaine in blocking dopamine reuptake (Hearn et al., 1991). Toxicity Other risks of cocaine use, beyond the potential for addiction, involve cardiac arrhythmias, myocardial ischemia, myocarditis, aortic dissection, cerebral vasoconstriction, and seizures. Death from trauma also is associated with cocaine use (Marzuk et al., 1995). Pregnant cocaine users may experience premature labor and abruptio placentae (Chasnoff et al., 1989). Attributing the developmental abnormalities reported in infants born to cocaine-using women simply to cocaine use is confounded by the infant's prematurity, multiple drug exposure, and overall poor pre- and postnatal care. Cocaine has been reported to produce a prolonged and intense orgasm if taken prior to intercourse, and its use is associated with compulsive and promiscuous sexual activity. Long-term cocaine use, however, usually results in reduced sexual drive; complaints of sexual problems are common among cocaine users presenting for treatment. Psychiatric disorders, including anxiety, depression, and psychosis, are common in cocaine users who request treatment. While some of these psychiatric disorders undoubtedly existed prior to the stimulant use, many develop during the course of the drug abuse (McLellan et al., 1979). Tolerance, Dependence, and Withdrawal

Sensitization is a consistent finding in animal studies of cocaine and other stimulants. Sensitization is produced by intermittent use and typically is measured by behavioral hyperactivity. In human cocaine users, sensitization for the euphoric effect typically is not seen. On the contrary, most experienced users report requiring more cocaine over time to obtain euphoria, i.e., tolerance. In the laboratory, tachyphylaxis (rapid tolerance) has been observed with reduced effects when the same dose is given repeatedly in one session. Sensitization may involve conditioning (Figure 24-2). Cocaine users often report a strong response on seeing cocaine before it is administered, consisting of physiological arousal and increased drug craving (O'Brien et al., 1992). Sensitization in human beings has been linked to paranoid, psychotic manifestations of cocaine use based on the observation that cocaine-induced hallucinations are typically seen after long-term exposure (mean 35 months) in vulnerable users (Satel et al., 1991). Repeated administration may be required to sensitize the patient to experience paranoia. Since cocaine typically is used intermittently, even heavy users go through frequent periods of withdrawal or "crash." The symptoms of withdrawal seen in users admitted to the hospital are listed in Table 24-8. Careful studies of cocaine users during withdrawal show gradual diminution of these symptoms over 1 to 3 weeks (Weddington et al., 1990). Residual depression may be seen after cocaine withdrawal and should be treated with antidepressant agents if it persists (see Chapter 19: Drugs and the Treatment of Psychiatric Disorders: Depression and Anxiety Disorders). Pharmacological Interventions Since cocaine withdrawal generally is mild, treatment of withdrawal symptoms usually is not required. The major problem in treatment is not detoxification but helping the patient to resist the urge to restart compulsive cocaine use. 
Rehabilitation programs involving individual and group psychotherapy based on the principles of Alcoholics Anonymous, and behavioral treatments based on reinforcing cocaine-free urine tests, result in significant improvement in the majority of cocaine users (Alterman et al., 1994; Higgins et al., 1994). Nonetheless, there is great interest in finding a medication that can aid in the rehabilitation of cocaine addicts. Numerous medications have been tried in clinical trials with cocaine addicts (O'Brien, 1997). While several drugs have been reported in individual studies to produce significant reductions in cocaine use, none has been found to be associated with consistent improvement in controlled clinical trials. The dopamine and serotonin systems have been the focus of many unsuccessful studies using both agonist and antagonist approaches. The concept that works well for opioid addiction, that of a long-acting agonist to satisfy drug craving and stabilize the patient so that normal function is possible, is difficult to transfer to the pharmacology of stimulants. Recent attention has been directed toward two novel approaches: a compound that competes with cocaine at the dopamine transporter and a vaccine that produces cocaine-binding antibodies. However, these should be regarded as innovative ideas that have yet to be shown to be clinically useful. For now, the treatment of choice for cocaine addiction remains behavioral, with medication indicated for specific coexisting disorders such as depression. Other CNS Stimulants Amphetamine and Related Agents Subjective effects similar to those of cocaine are produced by amphetamine, dextroamphetamine, methamphetamine, phenmetrazine, methylphenidate, and diethylpropion. Amphetamines increase synaptic dopamine primarily by stimulating presynaptic release rather than by blockade of reuptake, as is the case with cocaine. 
Intravenous or smoked methamphetamine produces an abuse/dependence syndrome similar to that of cocaine, although clinical deterioration may progress
more rapidly. Methamphetamine can be produced in small, clandestine laboratories starting with ephedrine, a widely available nonprescription stimulant, and it became a major problem in the western United States during the late 1990s. Oral stimulants, such as those prescribed in weight-reduction programs, have short-term efficacy because of tolerance development. Only a small proportion of patients introduced to these appetite suppressants subsequently exhibit dose escalation or drug-seeking from various physicians. Such patients may meet diagnostic criteria for abuse or addiction. Fenfluramine (no longer marketed in the United States) and phenylpropanolamine reduce appetite with no evidence of significant abuse potential. Mazindol also reduces appetite, with less stimulant activity than amphetamine. Khat is a plant material widely chewed in East Africa and Yemen for its stimulant properties; these are due to alkaloidal cathinone, a compound similar to amphetamine (Kalix, 1990). Methcathinone, a congener with similar effects, has been synthesized in clandestine laboratories throughout the midwestern United States, but widespread use in North America has not been reported. Caffeine Caffeine, a mild stimulant, is the most widely used psychoactive drug in the world. It is present in soft drinks, coffee, tea, cocoa, chocolate, and numerous prescription and over-the-counter drugs. It increases norepinephrine secretion and enhances neural activity in numerous brain areas. Caffeine is absorbed from the digestive tract; it is rapidly distributed throughout all tissues and easily crosses the placental barrier (see Chapter 28: Drugs Used in the Treatment of Asthma). Many of caffeine's effects are believed to occur by means of competitive antagonism at adenosine receptors. Adenosine is a neuromodulator that influences a number of functions in the CNS (see Chapters 12: Neurotransmission and the Central Nervous System and 28: Drugs Used in the Treatment of Asthma). 
The mild sedating effects that occur when adenosine activates particular adenosine receptor subtypes can be antagonized by caffeine. Tolerance occurs rapidly to the stimulating effects of caffeine. Thus, a mild withdrawal syndrome has been produced in controlled studies by abrupt cessation of as little as one to two cups of coffee per day. Caffeine withdrawal consists of feelings of fatigue and sedation. With higher doses, headaches and nausea have been reported during withdrawal; vomiting is rare (Silverman et al., 1992). Although a withdrawal syndrome can be demonstrated, few caffeine users report loss of control of caffeine intake or significant difficulty in reducing or stopping caffeine if desired (Dews et al., 1999). Thus caffeine is not listed in the category of addicting stimulants (American Psychiatric Association, 1994). Cannabinoids (Marijuana) The cannabis plant has been cultivated for centuries both for the production of hemp fiber and for its presumed medicinal and psychoactive properties. The smoke from burning cannabis contains many chemicals, including 61 different cannabinoids that have been identified. One of these, Δ9-tetrahydrocannabinol (Δ9-THC), produces most of the characteristic pharmacological effects of smoked marijuana. Surveys have shown that marijuana is the most commonly used nonlegal drug in the United States. Usage peaked during the late 1970s, when about 60% of high school seniors reported having used marijuana and nearly 11% reported daily use. This declined steadily among high school seniors to about 40% reporting some use during their lifetime and 2% reporting daily use in the mid-1990s, followed by a gradual increase to more than 5% reporting daily use in 1999. It must be noted that surveys among high school seniors tend to underestimate drug use because school dropouts are not
surveyed. A cannabinoid receptor has been identified in the brain (Devane et al., 1988) and cloned (Matsuda et al., 1990). An arachidonic acid derivative has been proposed as an endogenous ligand and named anandamide (Devane et al., 1992). While the physiological function of these receptors or their putative endogenous ligand has not been fully elucidated, they are widely dispersed, with high densities in the cerebral cortex, hippocampus, striatum, and cerebellum (Herkenham, 1993). Specific cannabinoid receptor antagonists have been developed, and these should facilitate understanding the role of this neurotransmitter system, not only in marijuana abuse but also in normal CNS functions. The pharmacological effects of Δ9-THC vary with the dose, route of administration, experience of the user, vulnerability to psychoactive effects, and setting of use. Intoxication with marijuana produces changes in mood, perception, and motivation, but the effect sought after by most users is the "high" and "mellowing out." This effect is described as different from the stimulant high and the opiate high. The effects vary with dose, but the typical marijuana smoker experiences a high that lasts about two hours. During this time, there is impairment of cognitive functions, perception, reaction time, learning, and memory. Impairment of coordination and tracking behavior has been reported to persist for several hours beyond the perception of the high. These impairments have obvious implications for the operation of a motor vehicle and performance in the workplace or at school. Marijuana also produces complex behavioral changes, such as giddiness and increased hunger. Although some users have reported increased pleasure from sex and increased insight during a marijuana high, these claims have not been substantiated. 
Unpleasant reactions such as panic or hallucinations and even acute psychosis may occur; several surveys indicate that 50% to 60% of marijuana users have reported at least one anxiety experience. These reactions commonly are seen with higher doses and with oral rather than smoked marijuana, because smoking permits the regulation of dose according to the effects. While there is no convincing evidence that marijuana can produce a lasting schizophrenia-like syndrome, there are numerous clinical reports that marijuana use can precipitate a recurrence in people with a history of schizophrenia. One of the most controversial of the effects that have been claimed for marijuana is the production of an "amotivational syndrome." This syndrome is not an official diagnosis, but it has been used to describe young people who drop out of social activities and show little interest in school, work, or other goal-directed activity. When heavy marijuana use accompanies these symptoms, the drug often is cited as the cause, even though there are no data that demonstrate a causal relationship between marijuana smoking and these behavioral characteristics. There is no evidence that marijuana use damages brain cells or produces any permanent functional changes, although there are animal data indicating impairment of maze learning that persists for weeks after the last dose. These findings are consistent with clinical reports of gradual improvement in mental state after cessation of chronic high-dose marijuana use. Several medicinal benefits of marijuana have been described. These include antinausea effects that have been applied to the relief of side effects of anticancer chemotherapy, muscle-relaxing effects, anticonvulsant effects, and reduction of intraocular pressure for the treatment of glaucoma. These medical benefits come at the cost of the psychoactive effects that often impair normal activities. 
Thus, there is no clear advantage of marijuana over conventional treatments for any of these indications (Institute of Medicine, 1999). With the cloning of cannabinoid receptors and the discovery of an endogenous ligand, it is hoped that medications can be developed that will produce
specific therapeutic effects without the undesirable properties of marijuana. Tolerance, Dependence, and Withdrawal Tolerance to most of the effects of marijuana can develop rapidly after only a few doses, but it also disappears rapidly. Tolerance to large doses has been found to persist in experimental animals for long periods after cessation of drug use. Withdrawal symptoms and signs are not typically seen in clinical populations. In fact, relatively few patients ever seek treatment for marijuana addiction. A withdrawal syndrome in human subjects has been described following close observation of marijuana users given regular oral doses of the agent on a research ward (Table 24-9). This syndrome, however, is seen clinically only in persons who use marijuana on a daily basis and then suddenly stop. Compulsive or regular marijuana users do not appear to be motivated by fear of withdrawal symptoms, although this has not been systematically studied. Pharmacological Interventions Marijuana abuse and addiction have no specific treatments. Heavy users may suffer from accompanying depression and thus may respond to antidepressant medication, but this should be decided on an individual basis considering the severity of the affective symptoms after the marijuana effects have dissipated. The residual drug effects may continue for several weeks. Psychedelic Agents Perceptual distortions that include hallucinations, illusions, and disorders of thinking such as paranoia can be produced by toxic doses of many drugs. These phenomena also may be seen during toxic withdrawal from sedatives such as alcohol. There are, however, certain drugs that have as their primary effect the production of perception, thought, or mood disturbances at low doses with minimal effects on memory and orientation. These are commonly called hallucinogenic drugs, but their use does not always result in frank hallucinations. 
In the late 1990s, the use of "club drugs" at all-night dance parties became popular. Such drugs include methylenedioxymethamphetamine ("Ecstasy," MDMA), lysergic acid diethylamide (LSD), phencyclidine (PCP), and ketamine. They often are used in association with illegal sedatives such as flunitrazepam (ROHYPNOL) or gamma hydroxybutyrate (GHB). The latter drug has the reputation of being particularly effective in preventing memory storage, so it has been implicated in "date rapes." The use of psychedelics received much public attention in the 1960s and 1970s, but their use waned in the 1980s. In 1989, the use of hallucinogenic drugs again began to increase in the United States. By 1993, a total of 11.8% of college students were reporting some use of these drugs during their lifetime. The increase was most striking in younger cohorts, beginning in the eighth grade. While psychedelic effects can be produced by a variety of different drugs, major psychedelic compounds come from two main categories. The indoleamine hallucinogens include LSD, DMT (N,N-dimethyltryptamine), and psilocybin. The phenethylamines include mescaline, dimethoxymethylamphetamine (DOM), methylenedioxyamphetamine (MDA), and MDMA. Both groups have a relatively high affinity for serotonin 5-HT2 receptors (see Chapter 11: 5-Hydroxytryptamine (Serotonin): Receptor Agonists and Antagonists), but they differ in their affinity for other subtypes of 5-HT receptors. There is a good correlation between the relative affinity of these compounds for 5-HT2 receptors and their potency as hallucinogens in human beings (Rivier and Pilet, 1971; Titeler et al., 1988). The 5-HT2 receptor is further implicated in the mechanism of hallucinations by the observation that antagonists of that receptor, such as ritanserin, are effective in blocking the behavioral and electrophysiological effects of hallucinogenic drugs in
animal models. However, LSD has been shown to interact with many receptor subtypes at nanomolar concentrations, and at present it is not possible to attribute the psychedelic effects to any single 5-HT receptor subtype (Peroutka, 1994).

LSD

LSD is the most potent hallucinogenic drug, producing significant psychedelic effects with a total dose of as little as 25 to 50 μg; it is more than 3000 times more potent than mescaline. LSD is sold on the illicit market in a variety of forms. A popular contemporary system involves postage stamp-sized papers impregnated with varying doses of LSD (50 to 300 μg or more). A majority of street samples sold as LSD actually contain LSD. In contrast, samples of mushrooms and other botanicals sold as sources of psilocybin and other psychedelics have a low probability of containing the advertised hallucinogen.

The effects of hallucinogenic drugs are variable, even in the same individual on different occasions. LSD is rapidly absorbed after oral administration, with effects beginning at 40 to 60 minutes, peaking at 2 to 4 hours, and gradually returning to baseline over 6 to 8 hours. At doses of 100 μg, LSD produces perceptual distortions and sometimes hallucinations; mood changes including elation, paranoia, or depression; intense arousal; and sometimes a feeling of panic. Signs of LSD ingestion include pupillary dilation, increased blood pressure and pulse, flushing, salivation, lacrimation, and hyperreflexia. Visual effects are prominent: colors seem more intense, shapes may appear altered, and the subject may focus attention on unusual items such as the pattern of hairs on the back of the hand.

Claims about the potential of psychedelic drugs for enhancing psychotherapy and for treating addictions and other mental disorders have not been supported by controlled treatment-outcome studies. Consequently, there is no current indication for these drugs as medications. 
A "bad trip" usually consists of severe anxiety, although at times it is marked by intense depression and suicidal thoughts. Visual disturbances usually are prominent. The bad trip from LSD may be difficult to distinguish from reactions to anticholinergic drugs and phencyclidine. There are no documented toxic fatalities from LSD use, but fatal accidents and suicides have occurred during or shortly after intoxication. Prolonged psychotic reactions lasting two days or more may occur after the ingestion of a hallucinogen. Schizophrenic episodes may be precipitated in susceptible individuals, and there is some evidence that chronic use of these drugs is associated with the development of persistent psychotic disorders (McLellan et al., 1979).

Tolerance, Physical Dependence, and Withdrawal

Frequent, repeated use of psychedelic drugs is unusual, and thus tolerance is not commonly seen. Tolerance does develop to the behavioral effects of LSD after three to four daily doses, but no withdrawal syndrome has been observed. Cross-tolerance among LSD, mescaline, and psilocybin has been demonstrated in animal models.

Pharmacological Intervention

Because of the unpredictability of psychedelic drug effects, any use carries some risk. Dependence and addiction do not occur, but users may require medical attention because of "bad trips." Severe agitation may require medication, and diazepam (20 mg orally) has been found to be effective.
"Talking down" by reassurance also has been shown to be effective and is the management of first choice. Neuroleptic medications (dopamine receptor antagonists; see Chapter 20: Drugs and the Treatment of Psychiatric Disorders: Psychosis and Mania) may intensify the experience.

A particularly troubling aftereffect of the use of LSD and similar drugs is the occurrence of episodic visual disturbances in a small proportion of former users. These originally were called "flashbacks" and resembled the experiences of prior LSD trips. There now is an official diagnostic category, hallucinogen persisting perception disorder (HPPD) (American Psychiatric Association, 1994). The symptoms include false fleeting perceptions in the peripheral fields, flashes of color, geometric pseudohallucinations, and positive afterimages (Abraham and Aldridge, 1993). The visual disorder appears stable in half of the cases and represents an apparently permanent alteration of the visual system. Precipitants include stress, fatigue, entry into a dark environment, marijuana, neuroleptics, and anxiety states.

MDMA ("Ecstasy") and MDA

MDMA and MDA are phenethylamines that have stimulant as well as psychedelic effects. MDMA became popular during the 1980s on college campuses because of testimonials that it enhances insight and self-knowledge. It was recommended by some psychotherapists as an aid to the process of therapy, although no controlled data exist to support this contention. Acute effects are dose-dependent and include tachycardia, dry mouth, jaw clenching, and muscle aches. At higher doses, the effects include visual hallucinations, agitation, hyperthermia, and panic attacks. MDA and MDMA produce degeneration of serotonergic nerve cells and axons in rats. While nerve degeneration has not been demonstrated in human beings, the cerebrospinal fluid of chronic MDMA users has been found to contain low levels of serotonin metabolites (Ricaurte et al., 2000). 
Thus, there is possible neurotoxicity with no evidence that the claimed benefits of MDMA actually occur.

Phencyclidine (PCP)

PCP deserves special mention because of its widespread availability and because its pharmacological effects differ from those of psychedelics such as LSD. PCP originally was developed as an anesthetic in the 1950s and later was abandoned because of a high frequency of postoperative delirium with hallucinations. It was classed as a dissociative anesthetic because, in the anesthetized state, the patient remains conscious, with staring gaze, flat facies, and rigid muscles. PCP became a drug of abuse in the 1970s, first in an oral form and then in a smoked version that enables better regulation of the dose.

The effects of PCP have been observed in normal volunteers under controlled conditions. As little as 50 μg/kg produces emotional withdrawal, concrete thinking, and bizarre responses to projective testing. Catatonic posturing also is produced and resembles that of schizophrenia. Abusers taking higher doses may appear to be reacting to hallucinations and may exhibit hostile or assaultive behavior. Anesthetic effects increase with dosage; stupor or coma may occur, with muscular rigidity, rhabdomyolysis, and hyperthermia. Intoxicated patients in the emergency room may progress from aggressive behavior to coma, with elevated blood pressure and enlarged, nonreactive pupils.

PCP binds with high affinity to sites located in the cortex and limbic structures, blocking NMDA-type glutamate receptors (see Chapter 12: Neurotransmission and the Central Nervous System); LSD and other psychedelics do not bind NMDA receptors. There is evidence that NMDA receptors are involved in ischemic neuronal death caused by high levels of excitatory amino acids; as a result, there is interest in PCP analogs that block NMDA receptors but have fewer
psychoactive effects.

Tolerance, Dependence, and Withdrawal

PCP is reinforcing in monkeys, as evidenced by self-administration patterns that produce continuous intoxication (Balster et al., 1973). Human beings tend to use PCP intermittently, but some surveys report daily use in 7% of users queried. There is evidence for tolerance to the behavioral effects of PCP in animals, but this has not been studied systematically in human beings. Signs of a PCP withdrawal syndrome were observed in monkeys after interruption of daily access to the drug; these include somnolence, tremor, seizures, diarrhea, piloerection, bruxism, and vocalizations.

Pharmacological Intervention

Overdose must be treated by life support, since there is no antagonist of PCP effects and no proven way to enhance excretion, although acidification of the urine has been proposed. PCP coma may last 7 to 10 days. The agitated or psychotic state produced by PCP can be treated with diazepam. Prolonged psychotic behavior requires neuroleptic medication such as haloperidol. Because of the anticholinergic activity of PCP, neuroleptics with significant anticholinergic effects, such as chlorpromazine, should be avoided.

Inhalants

Abused inhalants consist of many different categories of chemicals that are volatile at room temperature and produce abrupt changes in mental state when inhaled. Examples include toluene (from airplane glue), kerosene, gasoline, carbon tetrachloride, amyl nitrite, and nitrous oxide (see Chapter 68: Nonmetallic Environmental Toxicants: Air Pollutants, Solvents and Vapors, and Pesticides for a discussion of the toxicology of such agents). There are characteristic patterns of response for each substance. Solvents such as toluene typically are used by children. The material usually is placed in a plastic bag and the vapors inhaled. After several minutes of inhalation, dizziness and intoxication occur. 
Aerosol sprays containing fluorocarbon propellants are another source of solvent intoxication. Prolonged exposure or daily use may result in damage to several organ systems. Clinical problems include cardiac arrhythmias, bone marrow depression, cerebral degeneration, and damage to liver, kidney, and peripheral nerves. Death occasionally has been attributed to inhalant abuse, probably via the mechanism of cardiac arrhythmias, especially accompanying exercise or upper airway obstruction.

Amyl nitrite produces dilation of smooth muscle and has been used in the past for the treatment of angina. It is a yellow, volatile, flammable liquid with a fruity odor. In recent years, amyl nitrite and butyl nitrite have been used to relax smooth muscle and enhance orgasm, particularly by male homosexuals. These agents are obtained in the form of room deodorizers and can produce a feeling of "rush," flushing, and dizziness. Adverse effects include palpitations, postural hypotension, and headache progressing to loss of consciousness.

Anesthetic gases such as nitrous oxide or halothane are sometimes used as intoxicants by medical personnel. Nitrous oxide also is abused by food-service employees, because it is supplied as a propellant in disposable aluminum minitanks for whipped-cream canisters. Nitrous oxide produces euphoria and analgesia and then loss of consciousness. Compulsive use and chronic toxicity rarely are reported, but there are obvious risks of overdose associated with the abuse of this anesthetic, and chronic use has been reported to cause peripheral neuropathy.

Treatment of Drug Abuse and Addiction

The management of drug abuse and addiction must be individualized according to the drugs involved and to the associated psychosocial problems of the individual patient. Pharmacological interventions have been described for each category when medications are available. An understanding of the pharmacology of the drug or combination of drugs ingested by the patient is essential to rational and effective treatment. This may be a matter of urgency in the treatment of overdose or in the detoxification of a patient who is experiencing withdrawal symptoms.

It must be recognized, however, that treatment of the underlying addictive disorder requires months or years of rehabilitation. The behavior patterns encoded during thousands of prior drug ingestions do not disappear with detoxification from the drug, even after a typical 28-day inpatient rehabilitation program. Long periods of outpatient treatment are necessary, and there probably will be periods of relapse and remission. While complete abstinence is the preferred goal, in reality most patients are at risk of slipping back into drug-seeking behavior and require a period of retreatment. Maintenance medication can be effective in some circumstances, such as methadone for opioid dependence. The process can best be compared to the treatment of other chronic disorders such as diabetes, asthma, or hypertension: long-term medication may be necessary, and cures are not likely.

When viewed in the context of chronic disease, the available treatments for addiction are quite successful (McLellan et al., 1992; O'Brien, 1994). Long-term treatment is accompanied by improvements in physical status as well as in mental, social, and occupational function. Unfortunately, there is general pessimism in the medical community about the benefits of treatment, so that most of the therapeutic effort is directed at the complications of addiction, such as pulmonary, cardiac, and hepatic disorders. 
Prevention of these complications can be accomplished by addressing the underlying addictive disorder. For further discussion of alcoholism and drug dependency see Chapters 372 to 375 in Harrison's Principles of Internal Medicine, 16th ed., McGraw-Hill, New York, 2005.
