Lista - AFA
https://www.tecconcursos.com.br/s/Q2mbSz
Sorting: By Subject
www.tecconcursos.com.br/questoes/2077204
Transcript:
Q: To what extent, if at all, do you feel that your generation will have had a better or worse life than your parents’ generation, or will it be about the same?
Key: Better
Legend: Total / Great Britain / Under 30s
1 China 78%
2 Brazil 48%
3 Turkey 47%
4 India 46%
5 Japan 41%
6 Russia 41%
7 S. Africa 41%
T Total 37%
8 Argentina 34%
9 Sweden 32%
10 Australia 30%
11 Germany 30%
12 Poland 30%
13 S. Korea 27%
14 US 26%
15 Canada 24%
16 GB 22%
17 Italy 21%
18 Spain 16%
19 France 16%
20 Belgium 12%
The verb tense used in “your generation will have had a better or worse life”
www.tecconcursos.com.br/questoes/2077240
(SUMNER, Bernard; GILBERT, Gillian; HOOK, Peter; MORRIS, Stephen. Lyrics to Love Vigilantes, performed by New Order, Low Life CD, track 1, Universal Music Publishing Group, 1986. Taken from https://lyricfind.com)
www.tecconcursos.com.br/questoes/2077286
Elon Musk, the CEO of SpaceX and Tesla, caused something of a stir at the annual Air Force Association’s Air Warfare Symposium in Orlando, Florida. During an [A]
informal ‘fireside chat’ with Lt. Gen. John Thompson of Space Command on Friday, Musk told a room packed with fighter pilots that “the fighter jet era has passed”. Now,
Musk is renowned for finding soundbites for the media to hook onto, and also of causing [D] calculated disruption in his target audience, but even by his standards this is
a well-baited line. What makes this such a bold and controversial statement?
The contemporary battlefield is far more networked, complex, congested and lethal – it’s those that adapt that will survive. Simply being ‘a good stick’ will rapidly no
longer cut it in the hostile skies of the future.
Musk’s main thrust is, however, that the human in the cockpit is the limiting factor in air combat now, not the advantage.
Firstly, humans are awkward for aircraft designers to accommodate. In a modern fighter the pilot sits on an ejection seat for emergency escape, needs an oxygen supply
and air-conditioning to function at extremes of temperature and altitude, and requires flight controls and instruments to fly and fight the machine. All of these add a
significant amount of weight and cost into the aircraft. Moreover, having a transparent canopy to permit the crew to see out does [C] little to improve signature
management1 as radar cross section and the chances of reflected sun glint all increase. The human also adds more weight, especially when encumbered with flight kit
such as helmet, NVGs, g-suit and flight planning and survival equipment. Traditionally, pilots sit toward the front of the aircraft to afford them the best view for take-off
and landing, as well as air combat. This has implications on aerodynamics, stealth1 design and weight distribution factors. In short, it can make the designer’s life a lot [B]
simpler if the pilot/crew are not in the aircraft.
Vocabulary:
1. Signature management and stealth: both terms refer to technology that reduces the likelihood of personnel, aircraft, missiles, etc. being detected.
Analyzing the grammatical aspects of the text, and considering the standard use of language, it’s correct to say that
a) an introduces a description.
b) a lot can be replaced by very.
c) does is being used as an auxiliary.
d) causing can be used in the form cause.
www.tecconcursos.com.br/questoes/1655160
TEXT
When making a decision, it is a common impulse to look and see what others are doing. Nevertheless, it is often unclear whether the path that everyone else may be
following is good for us as well. After all, sometimes following the crowd has merit - at other times, it is simply peer pressure blinding us.
The phenomenon of looking to others and following the crowd _______ by social science for a long time. Nevertheless, those findings do not always make their way to
individual decision-makers. Therefore, let’s review why people conform to the crowd – and under what conditions it is a good idea to go your own way instead.
To start, individuals tend to look to the opinions of others, especially when they are unsure and lack information from other sources. This dynamic was supported by
classic research from Sherif (1937), who explored how a person’s perception of a very ambiguous stimulus can be influenced by the opinion of others. Sherif (1937) asked
participants to watch a small light in a dark and featureless room and evaluate how much that light moved around. In actuality, however, the light never moved at all –
but the way our perception works in that situation gives the possible illusion of movement (called the Autokinetic Effect). In this uncertain and ambiguous perceptual
situation, Sherif (1937) found that individuals were quite susceptible to the influence of the opinions of others when trying to decide how much light was “moving”.
Unfortunately, this phenomenon also extends to individuals following the crowd, even when they can clearly see that others are wrong. This was first evaluated by Asch
(1955), who asked participants to pick a line from a few choices of varying lengths that matched up with another example line given to them. From a perceptual
standpoint, the task was easy – as the correct choice of which lines were actually similar to one another was clear. Nevertheless, when participants were surrounded by
other individuals giving the wrong answer, they often conformed and made the wrong choice as well. Thus, even when the correct choice is clear, and what others are
doing is wrong, that peer pressure can still cause us to doubt ourselves and follow the crowd.
Why is it that we are so compelled to follow the crowd, even when it is objectively clear that they are wrong? According to more recent research, we may simply be wired
that way. Specifically, these social influences can actually change our perceptions and memories (Edelson, Sharot, Dolan, & Dudai, 2011). Therefore, rather than
knowingly making the wrong choice just to conform to peer pressure, the influence of others may actually change what we see as the correct choice in the moment and
remember as the right thing after the fact. Beyond that, we might just have “herding brains” with built-in components that monitor our social alignments and make us feel
good when we follow the crowd too (Shamay-Tsoory, Saporta, Marton-Alper, & Gvirts, 2019).
Fortunately, this effect has good points as well. In many cases, group decision-making can help individuals look beyond their own private perspectives and make more
rational decisions (Fahr & Irlenbusch, 2011). Furthermore, pro-social and altruistic behaviors can be influenced and shared through such conformity as well (Nook, Ong,
Morelli, Mitchell, & Zaki, 2016). Therefore, sometimes following the crowd helps people get along and make better decisions too.
Given the above, when making a decision, it is important to consider whether following others is a good idea – or is leading you astray instead. Some simple steps can
help you figure it out.
Getting swept away by what everyone else is doing is often an emotional and thoughtless process. We are conforming simply because we have not given sufficient
attention and effort toward considering any other options. Therefore, unless you are in an emergency situation and need to immediately follow everyone else toward the
nearest exit, it might be a good idea to switch to more deliberate thinking processes, rather than just going with your initial reaction.
Some choices and decision-making situations are more individual, while others are more social. Therefore, it is important to consider the specific situation. Is this an
individual choice, or does it involve others? If you have sufficient information to make a clear choice on your own, and you do not need group approval, then you might
want to make up your own mind. If you are personally unsure, or you need the support of others to make something happen, then taking the opinion of others into
consideration might be a good idea instead.
It is generally a good idea to evaluate your choices and decisions from multiple perspectives. The same is true for following the opinion of others too. Although it might
not feel that way at times, especially in the modern day of media coverage and social networking, everyone is not doing it – whatever “it” is that you are considering.
Given that, before you follow the advice or choices of any particular group of people, it might be a good idea to look at what other groups of people are doing or choosing
too. In addition, we can learn a lot from people making choices contrary to ourselves or our preferred group, particularly about potential down-sides to choices we might
not be seeing. Therefore, if you do need to look to others to help provide information regarding a particular choice or decision, then it might help to seek out people with
a few different opinions, weigh your options among them, and figure out what will work best for you.
In paragraph 2, the option that fits the gap appropriately in standard language is
a) have been studied.
b) has been studied.
c) has studied.
d) is studied.
www.tecconcursos.com.br/questoes/1655169
TEXT
(Same passage as in the previous question, with the gap filled: “has been studied”.)
www.tecconcursos.com.br/questoes/1655170
TEXT
(Same passage as in the previous question.)
www.tecconcursos.com.br/questoes/1473425
It weighed about 10,000 tons, entered the atmosphere at a speed of 64,000 km/h and exploded over a city with a blast of 500 kilotons. But on 15 February 2013, we
were lucky. The meteorite that showered pieces of rock over Chelyabinsk, Russia, was relatively small, at only about 17 metres wide. Although many people were
injured by falling glass, the damage was nothing compared to what had happened in Siberia nearly one hundred years ago, when a relatively small object
(approximately 50 metres in diameter) exploded in mid-air over a forest region, flattening about 80 million trees. If it had exploded over a city such as Moscow or London,
millions of people would have been killed.
By a strange coincidence, the same day that the meteorite terrified the people of Chelyabinsk, another 50m-wide asteroid passed relatively close to Earth. Scientists were
expecting that visit and know that the asteroid will return to fly close by us in 2046, but the Russian meteorite earlier in the day had been too small for anyone to spot.
Most scientists agree that comets and asteroids pose the biggest natural threat to human existence. It was probably a large asteroid or comet colliding with Earth which
wiped out the dinosaurs about 65 million years ago. An enormous object, 10 to 16 km in diameter, struck the Yucatan region in Mexico with the force of 100 megatons.
That is the equivalent of one Hiroshima bomb for every person alive on Earth today.
Many scientists, including the late Stephen Hawking, say that any comet or asteroid greater than 20km in diameter that hits Earth will result in the complete destruction
of complex life, including all animals and most plants. As we have seen even a much smaller asteroid can cause great damage.
The Earth has been kept fairly safe for the last 65 million years by good fortune and the massive gravitational field of the planet Jupiter. Our cosmic guardian, with its
stable circular orbit far from the sun, sweeps up and scatters away most of the dangerous comets and asteroids which might cross Earth’s orbit.
After the Chelyabinsk meteorite, scientists are now monitoring potential hazards even more carefully but, as far as they know, there is no danger in the foreseeable
future.
• Comet – a ball of rock and ice that sends out a tail of gas and dust behind it. Bright comets only appear in our visible night sky about once every ten years.
• Asteroid – a rock a few feet to several kms in diameter. Unlike comets, asteroids have no tail. Most are too small to cause any damage and burn up in the
atmosphere.
• Meteoroid – part of an asteroid or comet.
• Meteorite – what a meteoroid is called when it hits Earth.
The statement “many people were injured by falling glass” stands for
www.tecconcursos.com.br/questoes/1473428
(Same text as in the previous question.)
“If it had exploded over a city such as Moscow or London, millions of people would have been killed”. We can conclude from the information in this passage that
a) because of an explosion, millions of people died.
b) experts managed to save people from an explosion.
c) an explosion will hit both cities killing millions of people.
d) the explosion and the millions of deaths are hypothetical.
www.tecconcursos.com.br/questoes/1473430
It weighed about 10,000 tons, entered the atmosphere at a speed of 64,000 km/h and exploded over a city with a blast of 500 kilotons. But on 15 February 2013, we
A)
were lucky. The meteorite that showered pieces of rock over Chelyabinsk, Russia, was relatively small, at only about 17 metres wide. Although many people were
injured by falling glass, the damage was nothing compared to what had happened in Siberia nearly one hundred years ago, when a relatively small object (approximately
50 metres in diameter) exploded in mid-air over a forest region, flattening about 80 million trees. If it had exploded over a city such as Moscow or London, millions of
people would have been killed.
B)
By a strange coincidence, the same day that the meteorite terrified the people of Chelyabinsk, another 50m-wide asteroid passed relatively close to Earth. Scientists
were expecting that visit and know that the asteroid will return to fly close by us in 2046, but the Russian meteorite earlier in the day had been too small for anyone
to spot.
Most scientists agree that comets and asteroids pose the biggest natural threat to human existence. It was probably a large asteroid or comet colliding with Earth which
wiped out the dinosaurs about 65 million years ago. An enormous object, 10 to 16 km in diameter, struck the Yucatan region in Mexico with the force of 100 megatons.
C)
That is the equivalent of one Hiroshima bomb for every person alive on Earth today.
D)
Many scientists, including the late Stephen Hawking, say that any comet or asteroid greater than 20km in diameter that hits Earth will result in the complete
destruction of complex life, including all animals and most plants. As we have seen even a much smaller asteroid can cause great damage.
The Earth has been kept fairly safe for the last 65 million years by good fortune and the massive gravitational field of the planet Jupiter. Our cosmic guardian, with its
stable circular orbit far from the sun, sweeps up and scatters away most of the dangerous comets and asteroids which might cross Earth’s orbit.
After the Chelyabinsk meteorite, scientists are now monitoring potential hazards even more carefully but, as far as they know, there is no danger in the foreseeable
future.
• Comet – a ball of rock and ice that sends out a tail of gas and dust behind it. Bright comets only appear in our visible night sky about once every ten years.
• Asteroid – a rock a few feet to several kms in diameter. Unlike comets, asteroids have no tail. Most are too small to cause any damage and burn up in the
atmosphere.
• Meteoroid – part of an asteroid or comet.
• Meteorite – what a meteoroid is called when it hits Earth.
In “scientists were expecting that visit”, the underlined word has the same use as in
www.tecconcursos.com.br/questoes/1474184
(Same text as in the previous question.)
“Which” refers to
a) the sun.
b) comets and asteroids.
c) cosmic guardian.
d) Earth.
www.tecconcursos.com.br/questoes/1474186
(Same text as in the previous question.)
In the sentence “the dangerous comets and asteroids which might cross Earth’s orbit”, the underlined word is similar to
a) must.
b) should.
c) could.
d) shall.
www.tecconcursos.com.br/questoes/1260474
Cancer is the second leading cause of death in the United States, in Germany and in many other industrialized countries. In 2007, about 12 million people were diagnosed
with cancer worldwide with a mortality rate of 7.6 million (American Cancer Society, 2007). In the industrial countries, the most commonly diagnosed cancers in men are
prostate cancer, lung cancer and colorectal cancer. Women are most commonly diagnosed with breast cancer, gastric cancer and lung cancer.
The symptoms of cancer depend on the type of the disease, but there are common symptoms caused by cancer and/or by its medical treatment (e.g., chemotherapy and
radiation). Common physical symptoms are pain, fatigue, sleep disturbances, loss of appetite, nausea (feeling sick, vomiting), dizziness, limited physical activity, hair loss,
a sore mouth/throat and bowel problems. Cancer also often causes psychological problems such as depression, anxiety, mood disturbances, stress, insecurity, grief and
decreased self-esteem. This, in turn, can implicate social consequences. Social isolation can occur due to physical or psychological symptoms (for example, feeling too
tired to meet friends, cutting oneself off due to depressive complaints).
Besides conventional pharmacological treatments of cancer, there are treatments to meet psychological and physical needs of the patient. Psychological consequences of
cancer, such as depression, anxiety or loss of control, can be counteracted by psychotherapy. For example, within cognitive therapy cancer patients may develop coping
strategies to handle the disease. Research indicates that music therapy, which is a form of psychotherapy, can have positive effects on both physiological and
psychological symptoms of cancer patients as well as in acute or palliative situations.
There are several definitions of music therapy. According to the World Federation of Music Therapy (WFMT, 1996), music therapy is: “the use of music and/or its music
elements (sound, rhythm, melody and harmony) by a qualified music therapist, with a client or group, in a process designed to facilitate and promote communication,
relationship, learning, mobilization, expression, organization, and other relevant therapeutic objectives, in order to meet physical, emotional, mental, social and cognitive
needs”.
The Dutch Music Therapy Association (NVCT, 1999) defines music therapy as “a methodological form of assistance in which musical means are used within a therapeutic
relation to manage changes, developments, stabilisation or acceptance on the emotional, behavioural, cognitive, social or on the physical field”.
The assumption is that the patient's musical behaviour conforms to their general behaviour. The starting points are the features of the patient's specific disorder or
disease pattern. There is an analogy between psychological problems and musical behaviour, which means that emotions can be expressed musically. For patients who
have difficulties in expressing emotions, music therapy can be a useful medium. Music therapy might be a useful intervention for breast cancer patients in order to
facilitate and enhance their emotional expressivity. Besides analogy, there are further qualities of music that can be beneficial within therapeutic treatment. One of
these qualities is symbolism: music can symbolize persons, objects, incidents, experiences or memories of daily life. Therefore, music is a reality, which represents another
reality. The symbolism of the musical reality enables the patient to deal safely with the other reality for it evokes memories about persons, objects or incidents. These
associations can be perceived as positive or negative, so they release emotions in the patient.
Music therapy both addresses physical and psychological needs of the patient. Numerous studies indicate that music therapy can be beneficial to both acute cancer
patients and palliative cancer patients in the final stage of disease.
Most research with acute cancer patients receiving chemotherapy, surgery or stem cell transplantation examined the effectiveness of receptive music therapy. Listening to
music during chemotherapy, either played live by the music therapist or from tape has a positive effect on pain perception, relaxation, anxiety and mood. There was also
found a decrease in diastolic blood pressure or heart rate and an improvement in fatigue; insomnia and appetite loss could be significantly decreased in patients older
than 45 years. Further improvements by receptive music therapy were found for physical comfort, vitality, dizziness and tolerability of the chemotherapy. A study with
patients undergoing surgery found that receptive music therapy led to decreased anxiety, stress and relaxation levels before, during and after surgery. Music therapy can
also be applied in palliative situations, for example to patients with terminal cancer who live in hospices.
Studies indicate that music therapy may be beneficial for cancer patients in acute and palliative situations, but the benefits of music therapy for convalescing cancer
patients remain unclear. Whereas music therapy interventions for acute and palliative patients often focus on physiological and psychosomatic symptoms, such as pain
perception and reducing medical side-effects, music therapy with post-hospital curative treatment could have its main focus on psychological aspects. A cancer patient is
not free from cancer until five years after the tumour ablation. The patient fears that the cancer has not been defeated. In this stage of the disease, patients frequently
feel insecure, depressive and are emotionally unstable. How to handle irksome and negative emotions is an important issue for many oncology patients. After the difficult
period of the medical treatment, which they often have overcome in a prosaic way by masking emotions, patients often express the wish to become aware of themselves
again. They may wish to grapple with negative emotions due to their disease. Other patients wish to experience positive feelings, such as enjoyment and vitality.
The results indicate that music therapy can also have positive influences on well-being of cancer patients in the post-hospital curative stage as well as they offer valuable
information about patients' needs in this state of treatment and how effects can be dealt with properly.
a) superlative.
b) comparative of superiority.
c) comparative of inferiority.
d) comparative of equality.
www.tecconcursos.com.br/questoes/1260483
(Same text as in the previous question.)
Mark the sentence in which “that” can correctly replace the pronoun.
www.tecconcursos.com.br/questoes/1260487
(Same text as in the previous question.)
In the fragment “music therapy with post-hospital curative treatment could have its main focus on psychological aspects” the pronoun refers to
a) music therapy and post-hospital curative treatment.
www.tecconcursos.com.br/questoes/1264686
TEXT
Why are we fascinated by supervillains? Posing the question is much like asking why evil itself intrigues us, but there's much more to our continued interest in
supervillains than meets the eye.
Not only do Lex Luthor, Dracula and the Red Skull run unconstrained by conventional morality, they exist outside the limits of reality itself. Their evil, even at its most realistic, retains a touch of the unreal.
But is our fascination with fantastic fiends1 healthy? From a psychological perspective, views vary on what drives our enduring interest in superhuman bad guys.
Shadow confrontation: Psychiatrist Carl Jung believed we need to confront and understand our own hidden nature to grow as human beings. Healthy confrontation with
our shadow selves can unearth new strengths (e.g., Bruce Wayne creating his Dark Knight persona to fight crime), whereas unhealthy attempts at confrontation may
involve dwelling on or unleashing the worst parts of ourselves.
Wish fulfillment: Sigmund Freud viewed human nature as inherently antisocial, biologically driven by the undisciplined id's pleasure principle to get what we want when
we want it – born to be bad but held back by society. Even if the psyche fully develops its ego (source of self-control) and superego (conscience), Freudians say the id still dwells2 underneath, and it wishes for many selfish things – so it would love to be supervillainous.
Hierarchy of needs: Humanistic psychologist Abraham Maslow held that people who haven't met their most basic needs will have difficulty maturing. If starved for food,
you're unlikely to feel secure. If starved for love and companionship, you'll have trouble building self-esteem. People who dwell on their deficits may envy and resent
others who have more than they do. Some people who are unable to overcome social shortcomings fantasize about obtaining any means, good or bad, to satisfy every
need and greed.
Conditioning: Ivan Pavlov would say we can learn to associate supervillains with other things we value – like entertainment, strength, freedom or the heroes themselves.
Behaviorist B.F. Skinner would likely argue that we can find it reinforcing to watch or read about supervillains, but without knowing what's reinforcing about them, that's a
bit like saying it's rewarding because it's rewarding.
Throughout history, humans have been captivated by stories of heroes facing off against superhuman foes3. But what specific rewards, needs, wishes and dark dreams
do supervillains satisfy?
Freedom: Superpowered characters enjoy freedoms the rest of us don't. Nobody can arrest Superman unless he lets them (at least not without kryptonite handcuffs). As
much time as supervillains spend locked up, they seem to escape as often as they please, to run unconstrained by rules and regulations. Cosplayers who dress like
Wonder Woman and Captain America can't do any crazy thing that crosses their minds without seeming to mock and insult our heroes, whereas those dressed as villains
get to go wild. Supervillainy feels liberating.
Power: Maybe you envy the power these evil characters wield . While that's also a reason to adore superheroes, good guys don't ache to dominate. Stories like
Watchmen and Kingdom Come show how heroes become menaces when they try to take over.
So when dreaming of superpowers, maybe you relate to characters who dream of power as well, from the Scarecrow (who controls individuals' fears) to Doctor Doom (who's perpetually out to dominate the world).
Better villain than victim: Physiologically, anger activates us and feels better than anxiety or fear. One who feels victimized and cannot figure out constructive ways to
stand up, be strong or become heroic might twist the need for self-assertion into destruction. Alternately, a healthy person simply might focus on how all characters assert
themselves in any given story.
Better villain equals better hero: A hero only appears as heroic as the challenge he or she must overcome. Great heroes require great villains. Without supercriminals, the
world's finest heroes seem like overpowered brutes nabbing thugs unworthy of them. Through myths, legends and lore across time, we have needed heroes who rise to
the occasion, overcome great odds and take down giants.
Facing our fears: Instead of dreading the darkness, you might reduce that dread by shining a light and seeing what's out there. Fiction can help us feel empowered and
enlightened without literally traipsing into mob hangouts and poorly lit alleyways .
Exploring the unknown: Our need to challenge the unknown has driven the human race to cover the globe. This powerful curiosity makes us wonder about everything
that baffles us, including the world's worst fiends. Knowledge is power, or at least feels like it. When gritty details repulse us, exploring evil through the filter of fiction
can help us contemplate humanity's worst without turning away or dwelling almost voyeuristically on real human tragedy. Even when the fiction is about improbable
people doing impossible things, the story's fantastic nature reassures us that this cannot happen – and therefore we don't have to turn away.
In the end, our interest in supervillains can be healthy or unhealthy. Even the more maladaptive reasons for such fascination tend to arise from motivations that were
originally healthy and natural – frustrated drives that went the wrong way.
Remember, though, that superheroic fiction ultimately begins and ends with the heroes. Comic book writers and artists create supervillains, who move in and out as guest
stars and supporting cast, first and foremost to reveal how heroic the comics' stars can be.
Glossary:
1. fiend – an evil and cruel person
2. to dwell – remain
3. foe – an enemy
4. to wield – influence, use power
5. menace – threat
6. to nab thugs – arrest criminals
7. odds – probability
8. to traipse into mob hangouts – walk among places where gangs, criminals meet
9. poorly lit alleyways – narrow road or path with little light
10. to baffle – confuse somebody completely
“ Not only do Lex Luthor, Dracula and the Red Skull run unconstrained by conventional morality [F]”
a) to perform an action.
www.tecconcursos.com.br/questoes/1264687
TEXT
Why are we fascinated by supervillains? Posing the question is much like asking why evil itself intrigues us, but there's much more to our continued interest in
supervillains than meets the eye.
Not only do Lex Luthor, Dracula and the Red Skull run unconstrained by conventional morality, they exist outside the limits of reality itself. Their evil, even at its most
realistic, retains a touch of the unreal.
But is our fascination with fantastic fiends healthy? From a psychological perspective, views vary on what drives our enduring interest in superhuman bad guys.
Shadow confrontation: Psychiatrist Carl Jung believed we need to confront and understand our own hidden nature to grow as human beings. Healthy confrontation with
our shadow selves can unearth new strengths (e.g., Bruce Wayne creating his Dark Knight persona to fight crime), whereas unhealthy attempts at confrontation may
involve dwelling on or unleashing the worst parts of ourselves.
Wish fulfillment: Sigmund Freud viewed human nature as inherently antisocial, biologically driven by the undisciplined id's pleasure principle to get what we want when
we want it – born to be bad but held back by society. Even if the psyche fully develops its ego (source of self-control) and superego (conscience), Freudians say the id still
dwells underneath, and it wishes for many selfish things – so it would love to be supervillainous.
Hierarchy of needs: Humanistic psychologist Abraham Maslow held that people who haven't met their most basic needs will have difficulty maturing. If starved for food,
you're unlikely to feel secure. If starved for love and companionship, you'll have trouble building self-esteem. People who dwell on their deficits may envy and resent
others who have more than they do. Some people who are unable to overcome social shortcomings fantasize about obtaining any means, good or bad, to satisfy every
need and greed.
Conditioning: Ivan Pavlov would say we can learn to associate supervillains with other things we value – like entertainment, strength, freedom or the heroes themselves.
Behaviorist B.F. Skinner would likely argue that we can find it reinforcing to watch or read about supervillains, but without knowing what's reinforcing about them, that's a
bit like saying it's rewarding because it's rewarding.
Throughout history, humans have been captivated by stories of heroes facing off against superhuman foes . But what specific rewards, needs, wishes and dark dreams
do supervillains satisfy?
Freedom: Superpowered characters enjoy freedoms the rest of us don't. Nobody can arrest Superman unless he lets them (at least not without kryptonite handcuffs). As
much time as supervillains spend locked up, they seem to escape as often as they please, to run unconstrained by rules and regulations. Cosplayers who dress like
Wonder Woman and Captain America can't do any crazy thing that crosses their minds without seeming to mock and insult our heroes, whereas those dressed as villains
get to go wild. Supervillainy feels liberating.
Power: Maybe you envy the power these evil characters wield . While that's also a reason to adore superheroes, good guys don't ache to dominate. Stories like
Watchmen and Kingdom Come show how heroes become menaces when they try to take over.
So when dreaming of superpowers, maybe you relate to characters who dream of power as well, from the Scarecrow (who controls individuals' fears) to Doctor Doom
(who's perpetually out to dominate the world).
Better villain than victim: Physiologically, anger activates us and feels better than anxiety or fear. One who feels victimized and cannot figure out constructive ways to
stand up, be strong or become heroic might twist the need for self-assertion into destruction. Alternately, a healthy person simply might focus on how all characters assert
themselves in any given story.
Better villain equals better hero: A hero only appears as heroic as the challenge he or she must overcome. Great heroes require great villains. Without supercriminals, the
world's finest heroes seem like overpowered brutes nabbing thugs unworthy of them. Through myths, legends and lore across time, we have needed heroes who rise to
the occasion, overcome great odds and take down giants.
Facing our fears: Instead of dreading the darkness, you might reduce that dread by shining a light and seeing what's out there. Fiction can help us feel empowered and
enlightened without literally traipsing into mob hangouts and poorly lit alleyways .
Exploring the unknown: Our need to challenge the unknown has driven the human race to cover the globe. This powerful curiosity makes us wonder about everything
that baffles us, including the world's worst fiends. Knowledge is power, or at least feels like it. When gritty details repulse us, exploring evil through the filter of fiction
can help us contemplate humanity's worst without turning away or dwelling almost voyeuristically on real human tragedy. Even when the fiction is about improbable
people doing impossible things, the story's fantastic nature reassures us that this cannot happen – and therefore we don't have to turn away.
In the end, our interest in supervillains can be healthy or unhealthy. Even the more maladaptive reasons for such fascination tend to arise from motivations that were
originally healthy and natural – frustrated drives that went the wrong way.
Remember, though, that superheroic fiction ultimately begins and ends with the heroes. Comic book writers and artists create supervillains, who move in and out as guest
stars and supporting cast, first and foremost to reveal how heroic the comics' stars can be.
Glossary:
1. fiend – an evil and cruel person
2. to dwell – remain
3. foe – an enemy
4. to wield – influence, use power
5. menace – threat
6. to nab thugs – arrest criminals
7. odds – probability
8. to traipse into mob hangouts – walk among places where gangs, criminals meet
9. poorly lit alleyways – narrow road or path with little light
10. to baffle – confuse somebody completely
Mark the alternative which has the sentence below correctly reported.
The author
b) said that their fascination with fantastic fiends had been healthy.
c) told the readers their fascination with fantastic fiends has been healthy.
www.tecconcursos.com.br/questoes/1264689
TEXT
Why are we fascinated by supervillains? Posing the question is much like asking why evil itself intrigues us, but there's much more to our continued interest in
supervillains than meets the eye.
Not only do Lex Luthor, Dracula and the Red Skull run unconstrained by conventional morality, they exist outside the limits of reality itself. Their evil, even at its most
realistic, retains a touch of the unreal.
But is our fascination with fantastic fiends healthy? From a psychological perspective, views vary on what drives our enduring interest in superhuman bad guys.
Shadow confrontation: Psychiatrist Carl Jung believed we need to confront and understand our own hidden nature to grow as human beings. Healthy confrontation with
our shadow selves can unearth new strengths (e.g., Bruce Wayne creating his Dark Knight persona to fight crime), whereas unhealthy attempts at confrontation may
involve dwelling on or unleashing the worst parts of ourselves.
Wish fulfillment: Sigmund Freud viewed human nature as inherently antisocial, biologically driven by the undisciplined id's pleasure principle to get what we want when
we want it – born to be bad but held back by society. Even if the psyche fully develops its ego (source of self-control) and superego (conscience), Freudians say the id still
dwells underneath, and it wishes for many selfish things – so it would love to be supervillainous.
Hierarchy of needs: Humanistic psychologist Abraham Maslow held that people who haven't met their most basic needs will have difficulty maturing. If starved for food,
you're unlikely to feel secure. If starved for love and companionship, you'll have trouble building self-esteem. People who dwell on their deficits may envy and resent
others who have more than they do. Some people who are unable to overcome social shortcomings fantasize about obtaining any means, good or bad, to satisfy every
need and greed.
Conditioning: Ivan Pavlov would say we can learn to associate supervillains with other things we value – like entertainment, strength, freedom or the heroes themselves.
Behaviorist B.F. Skinner would likely argue that we can find it reinforcing to watch or read about supervillains, but without knowing what's reinforcing about them, that's a
bit like saying it's rewarding because it's rewarding.
Throughout history, humans have been captivated by stories of heroes facing off against superhuman foes . But what specific rewards, needs, wishes and dark dreams
do supervillains satisfy?
Freedom: Superpowered characters enjoy freedoms the rest of us don't. Nobody can arrest Superman unless he lets them (at least not without kryptonite handcuffs). As
much time as supervillains spend locked up, they seem to escape as often as they please, to run unconstrained by rules and regulations. Cosplayers who dress like
Wonder Woman and Captain America can't do any crazy thing that crosses their minds without seeming to mock and insult our heroes, whereas those dressed as villains
get to go wild. Supervillainy feels liberating.
Power: Maybe you envy the power these evil characters wield . While that's also a reason to adore superheroes, good guys don't ache to dominate. Stories like
Watchmen and Kingdom Come show how heroes become menaces when they try to take over.
So when dreaming of superpowers, maybe you relate to characters who dream of power as well, from the Scarecrow (who controls individuals' fears) to Doctor Doom
(who's perpetually out to dominate the world).
Better villain than victim: Physiologically, anger activates us and feels better than anxiety or fear. One who feels victimized and cannot figure out constructive ways to
stand up, be strong or become heroic might twist the need for self-assertion into destruction. Alternately, a healthy person simply might focus on how all characters assert
themselves in any given story.
Better villain equals better hero: A hero only appears as heroic as the challenge he or she must overcome. Great heroes require great villains. Without supercriminals, the
world's finest heroes seem like overpowered brutes nabbing thugs unworthy of them. Through myths, legends and lore across time, we have needed heroes who rise to
the occasion, overcome great odds and take down giants.
Facing our fears: Instead of dreading the darkness, you might reduce that dread by shining a light and seeing what's out there. Fiction can help us feel empowered and
enlightened without literally traipsing into mob hangouts and poorly lit alleyways .
Exploring the unknown: Our need to challenge the unknown has driven the human race to cover the globe. This powerful curiosity makes us wonder about everything
that baffles us, including the world's worst fiends. Knowledge is power, or at least feels like it. When gritty details repulse us, exploring evil through the filter of fiction
can help us contemplate humanity's worst without turning away or dwelling almost voyeuristically on real human tragedy. Even when the fiction is about improbable
people doing impossible things, the story's fantastic nature reassures us that this cannot happen – and therefore we don't have to turn away.
In the end, our interest in supervillains can be healthy or unhealthy. Even the more maladaptive reasons for such fascination tend to arise from motivations that were
originally healthy and natural – frustrated drives that went the wrong way.
Remember, though, that superheroic fiction ultimately begins and ends with the heroes. Comic book writers and artists create supervillains, who move in and out as guest
stars and supporting cast, first and foremost to reveal how heroic the comics' stars can be.
Glossary:
1. fiend – an evil and cruel person
2. to dwell – remain
3. foe – an enemy
4. to wield – influence, use power
5. menace – threat
6. to nab thugs – arrest criminals
7. odds – probability
8. to traipse into mob hangouts – walk among places where gangs, criminals meet
9. poorly lit alleyways – narrow road or path with little light
10. to baffle – confuse somebody completely
The sentences below are used in the interrogative form. Mark the one that is grammatically correct.
a) Why does evil itself intrigues us?
www.tecconcursos.com.br/questoes/1264690
TEXT
Why are we fascinated by supervillains? Posing the question is much like asking why evil itself intrigues us, but there's much more to our continued interest in
supervillains than meets the eye.
Not only do Lex Luthor, Dracula and the Red Skull run unconstrained by conventional morality, they exist outside the limits of reality itself . Their evil, even at its most
realistic, retains a touch of the unreal.
But is our fascination with fantastic fiends healthy? From a psychological perspective, views vary on what drives our enduring interest in superhuman bad guys.
Shadow confrontation: Psychiatrist Carl Jung believed we need to confront and understand our own hidden nature to grow as human beings. Healthy confrontation with
our shadow selves can unearth new strengths (e.g., Bruce Wayne creating his Dark Knight persona to fight crime), whereas unhealthy attempts at confrontation may
involve dwelling on or unleashing the worst parts of ourselves.
Wish fulfillment: Sigmund Freud viewed human nature as inherently antisocial, biologically driven by the undisciplined id's pleasure principle to get what we want when
we want it – born to be bad but held back by society. Even if the psyche fully develops its ego (source of self-control) and superego (conscience), Freudians say the id still
dwells underneath, and it wishes for many selfish things – so it would love to be supervillainous.
Hierarchy of needs: Humanistic psychologist Abraham Maslow held that people who haven't met their most basic needs will have difficulty maturing. If starved for food,
you're unlikely to feel secure. If starved for love and companionship, you'll have trouble building self-esteem. People who dwell on their deficits may envy and resent
others who have more than they do. Some people who are unable to overcome social shortcomings fantasize about obtaining any means, good or bad, to satisfy every
need and greed.
Conditioning: Ivan Pavlov would say we can learn to associate supervillains with other things we value – like entertainment, strength, freedom or the heroes themselves .
Behaviorist B.F. Skinner would likely argue that we can find it reinforcing to watch or read about supervillains, but without knowing what's reinforcing about them, that's a
bit like saying it's rewarding because it's rewarding.
Throughout history, humans have been captivated by stories of heroes facing off against superhuman foes . But what specific rewards, needs, wishes and dark dreams
do supervillains satisfy?
Freedom: Superpowered characters enjoy freedoms the rest of us don't. Nobody can arrest Superman unless he lets them (at least not without kryptonite handcuffs). As
much time as supervillains spend locked up, they seem to escape as often as they please, to run unconstrained by rules and regulations. Cosplayers who dress like
Wonder Woman and Captain America can't do any crazy thing that crosses their minds without seeming to mock and insult our heroes, whereas those dressed as villains
get to go wild. Supervillainy feels liberating.
Power: Maybe you envy the power these evil characters wield . While that's also a reason to adore superheroes, good guys don't ache to dominate. Stories like
Watchmen and Kingdom Come show how heroes become menaces when they try to take over.
So when dreaming of superpowers, maybe you relate to characters who dream of power as well, from the Scarecrow (who controls individuals' fears) to Doctor Doom
(who's perpetually out to dominate the world).
Better villain than victim: Physiologically, anger activates us and feels better than anxiety or fear. One who feels victimized and cannot figure out constructive ways to
stand up, be strong or become heroic might twist the need for self-assertion into destruction . Alternately, a healthy person simply might focus on how all characters
assert themselves in any given story .
Better villain equals better hero: A hero only appears as heroic as the challenge he or she must overcome. Great heroes require great villains. Without supercriminals, the
world's finest heroes seem like overpowered brutes nabbing thugs unworthy of them. Through myths, legends and lore across time, we have needed heroes who rise to
the occasion, overcome great odds and take down giants.
Facing our fears: Instead of dreading the darkness, you might reduce that dread by shining a light and seeing what's out there. Fiction can help us feel empowered and
enlightened without literally traipsing into mob hangouts and poorly lit alleyways .
Exploring the unknown: Our need to challenge the unknown has driven the human race to cover the globe. This powerful curiosity makes us wonder about everything
that baffles us, including the world's worst fiends. Knowledge is power, or at least feels like it. When gritty details repulse us, exploring evil through the filter of fiction
can help us contemplate humanity's worst without turning away or dwelling almost voyeuristically on real human tragedy. Even when the fiction is about improbable
people doing impossible things, the story's fantastic nature reassures us that this cannot happen – and therefore we don't have to turn away.
In the end, our interest in supervillains can be healthy or unhealthy. Even the more maladaptive reasons for such fascination tend to arise from motivations that were
originally healthy and natural – frustrated drives that went the wrong way.
Remember, though, that superheroic fiction ultimately begins and ends with the heroes. Comic book writers and artists create supervillains, who move in and out as guest
stars and supporting cast, first and foremost to reveal how heroic the comics' stars can be.
Glossary:
1. fiend – an evil and cruel person
2. to dwell – remain
3. foe – an enemy
4. to wield – influence, use power
5. menace – threat
6. to nab thugs – arrest criminals
7. odds – probability
8. to traipse into mob hangouts – walk among places where gangs, criminals meet
9. poorly lit alleyways – narrow road or path with little light
10. to baffle – confuse somebody completely
Mark the option in which the underlined word makes it clear that the subject and the object are the same.
a) [...] they exist outside the limits of reality itself.
c) [...] become heroic might twist the need for self-assertion into destruction.
www.tecconcursos.com.br/questoes/1264691
TEXT
Why are we fascinated by supervillains? Posing the question is much like asking why evil itself intrigues us, but there's much more to our continued interest in
supervillains than meets the eye.
Not only do Lex Luthor, Dracula and the Red Skull run unconstrained by conventional morality, they exist outside the limits of reality itself. Their evil, even at its most
realistic, retains a touch of the unreal.
But is our fascination with fantastic fiends healthy? From a psychological perspective, views vary on what drives our enduring interest in superhuman bad guys.
Shadow confrontation: Psychiatrist Carl Jung believed we need to confront and understand our own hidden nature to grow as human beings. Healthy confrontation with
our shadow selves can unearth new strengths (e.g., Bruce Wayne creating his Dark Knight persona to fight crime), whereas unhealthy attempts at confrontation may
involve dwelling on or unleashing the worst parts of ourselves.
Wish fulfillment: Sigmund Freud viewed human nature as inherently antisocial, biologically driven by the undisciplined id's pleasure principle to get what we want when
we want it – born to be bad but held back by society. Even if the psyche fully develops its ego (source of self-control) and superego (conscience), Freudians say the id still
dwells underneath, and it wishes for many selfish things – so it would love to be supervillainous.
Hierarchy of needs: Humanistic psychologist Abraham Maslow held that people who haven't met their most basic needs will have difficulty maturing. If starved for food,
you're unlikely to feel secure. If starved for love and companionship, you'll have trouble building self-esteem. People who dwell on their deficits may envy and resent
others who have more than they do. Some people who are unable to overcome social shortcomings fantasize about obtaining any means, good or bad, to satisfy every
need and greed.
Conditioning: Ivan Pavlov would say we can learn to associate supervillains with other things we value – like entertainment, strength, freedom or the heroes themselves.
Behaviorist B.F. Skinner would likely argue that we can find it reinforcing to watch or read about supervillains, but without knowing what's reinforcing about them, that's a
bit like saying it's rewarding because it's rewarding.
Throughout history, humans have been captivated by stories of heroes facing off against superhuman foes . But what specific rewards, needs, wishes and dark dreams
do supervillains satisfy?
Freedom: Superpowered characters enjoy freedoms the rest of us don't. Nobody can arrest Superman unless he lets them (at least not without kryptonite handcuffs). As
much time as supervillains spend locked up, they seem to escape as often as they please, to run unconstrained by rules and regulations. Cosplayers who dress like
Wonder Woman and Captain America can't do any crazy thing that crosses their minds without seeming to mock and insult our heroes, whereas those dressed as villains
get to go wild. Supervillainy feels liberating.
Power: Maybe you envy the power these evil characters wield . While that's also a reason to adore superheroes, good guys don't ache to dominate. Stories like
Watchmen and Kingdom Come show how heroes become menaces when they try to take over.
So when dreaming of superpowers, maybe you relate to characters who dream of power as well, from the Scarecrow (who controls individuals' fears) to Doctor Doom
(who's perpetually out to dominate the world).
Better villain than victim: Physiologically, anger activates us and feels better than anxiety or fear. One who feels victimized and cannot figure out constructive ways to
stand up, be strong or become heroic might twist the need for self-assertion into destruction. Alternately, a healthy person simply might focus on how all characters assert
themselves in any given story.
Better villain equals better hero: A hero only appears as heroic as the challenge he or she must overcome. Great heroes require great villains. Without supercriminals, the
world's finest heroes seem like overpowered brutes nabbing thugs unworthy of them. Through myths, legends and lore across time, we have needed heroes who rise to
the occasion, overcome great odds and take down giants.
Facing our fears: Instead of dreading the darkness, you might reduce that dread by shining a light and seeing what's out there. Fiction can help us feel empowered and
enlightened without literally traipsing into mob hangouts and poorly lit alleyways .
Exploring the unknown: Our need to challenge the unknown has driven the human race to cover the globe. This powerful curiosity makes us wonder about everything
that baffles us, including the world's worst fiends. Knowledge is power, or at least feels like it. When gritty details repulse us, exploring evil through the filter of fiction
can help us contemplate humanity's worst without turning away or dwelling almost voyeuristically on real human tragedy. Even when the fiction is about improbable
people doing impossible things, the story's fantastic nature reassures us that this cannot happen – and therefore we don't have to turn away.
In the end, our interest in supervillains can be healthy or unhealthy. Even the more maladaptive reasons for such fascination tend to arise from motivations that were
originally healthy and natural – frustrated drives that went the wrong way.
Remember, though, that superheroic fiction ultimately begins and ends with the heroes. Comic book writers and artists create supervillains, who move in and out as guest
stars and supporting cast, first and foremost to reveal how heroic the comics' stars can be.
Glossary:
1. fiend – an evil and cruel person
2. to dwell – remain
3. foe – an enemy
4. to wield – influence, use power
5. menace – threat
6. to nab thugs – arrest criminals
7. odds – probability
8. to traipse into mob hangouts – walk among places where gangs, criminals meet
9. poorly lit alleyways – narrow road or path with little light
10. to baffle – confuse somebody completely
The sentence “[...] Abraham Maslow held that people who haven't met their most basic needs will have difficulty maturing.” means the psychologist believes that
a) if people don’t become mature, they will have trouble meeting their basic needs.
b) if Abraham Maslow hadn’t met his basic needs, people would have had difficulty maturing.
c) unless people fulfill their basic necessities, getting mature won’t be easy for them.
d) unless one gets their basic necessities, they won’t have difficulty maturing.
www.tecconcursos.com.br/questoes/1264694
TEXT
Why are we fascinated by supervillains? Posing the question is much like asking why evil itself intrigues us, but there's much more to our continued interest in
supervillains than meets the eye.
Not only do Lex Luthor, Dracula and the Red Skull run unconstrained by conventional morality, they exist outside the limits of reality itself. Their evil, even at its most
realistic, retains a touch of the unreal.
But is our fascination with fantastic fiends healthy? From a psychological perspective, views vary on what drives our enduring interest in superhuman bad guys.
Shadow confrontation: Psychiatrist Carl Jung believed we need to confront and understand our own hidden nature to grow as human beings. Healthy confrontation with
our shadow selves can unearth new strengths (e.g., Bruce Wayne creating his Dark Knight persona to fight crime), whereas unhealthy attempts at confrontation may
involve dwelling on or unleashing the worst parts of ourselves.
Wish fulfillment: Sigmund Freud viewed human nature as inherently antisocial, biologically driven by the undisciplined id's pleasure principle to get what we want when
we want it – born to be bad but held back by society. Even if the psyche fully develops its ego (source of self-control) and superego (conscience), Freudians say the id still
dwells underneath, and it wishes for many selfish things – so it would love to be supervillainous.
Hierarchy of needs: Humanistic psychologist Abraham Maslow held that people who haven't met their most basic needs will have difficulty maturing. If starved for food,
you're unlikely to feel secure. If starved for love and companionship, you'll have trouble building self-esteem. People who dwell on their deficits may envy and resent
others who have more than they do. Some people who are unable to overcome social shortcomings fantasize about obtaining any means, good or bad, to satisfy every
need and greed.
Conditioning: Ivan Pavlov would say we can learn to associate supervillains with other things we value – like entertainment, strength, freedom or the heroes themselves.
Behaviorist B.F. Skinner would likely argue that we can find it reinforcing to watch or read about supervillains, but without knowing what's reinforcing about them, that's a
bit like saying it's rewarding because it's rewarding.
Throughout history, humans have been captivated by stories of heroes facing off against superhuman foes . But what specific rewards, needs, wishes and dark dreams
do supervillains satisfy?
Freedom: Superpowered characters enjoy freedoms the rest of us don't. Nobody can arrest Superman unless he lets them (at least not without kryptonite handcuffs). As
much time as supervillains spend locked up, they seem to escape as often as they please, to run unconstrained by rules and regulations. Cosplayers who dress
like Wonder Woman and Captain America can't do any crazy thing that crosses their minds without seeming to mock and insult our heroes, whereas those dressed as
villains get to go wild. Supervillainy feels liberating.
Power: Maybe you envy the power these evil characters wield . While that's also a reason to adore superheroes, good guys don't ache to dominate. Stories like
Watchmen and Kingdom Come show how heroes become menaces when they try to take over.
So when dreaming of superpowers, maybe you relate to characters who dream of power as well, from the Scarecrow (who controls individuals' fears) to Doctor Doom
(who's perpetually out to dominate the world).
Better villain than victim: Physiologically, anger activates us and feels better than anxiety or fear. One who feels victimized and cannot figure out constructive ways to
stand up, be strong or become heroic might twist the need for self-assertion into destruction. Alternately, a healthy person simply might focus on how all characters assert
themselves in any given story.
Better villain equals better hero: A hero only appears as heroic as the challenge he or she must overcome. Great heroes require great villains. Without supercriminals, the
world's finest heroes seem like overpowered brutes nabbing thugs unworthy of them. Through myths, legends and lore across time, we have needed heroes who rise to
the occasion, overcome great odds and take down giants.
Facing our fears: Instead of dreading the darkness, you might reduce that dread by shining a light and seeing what's out there. Fiction can help us feel empowered and
enlightened without literally traipsing into mob hangouts and poorly lit alleyways .
Exploring the unknown: Our need to challenge the unknown has driven the human race to cover the globe. This powerful curiosity makes us wonder about everything
that baffles us, including the world's worst fiends. Knowledge is power, or at least feels like it. When gritty details repulse us, exploring evil through the filter of fiction
can help us contemplate humanity's worst without turning away or dwelling almost voyeuristically on real human tragedy. Even when the fiction is about improbable
people doing impossible things, the story's fantastic nature reassures us that this cannot happen – and therefore we don't have to turn away.
In the end, our interest in supervillains can be healthy or unhealthy. Even the more maladaptive reasons for such fascination tend to arise from motivations that were
originally healthy and natural – frustrated drives that went the wrong way.
Remember, though, that superheroic fiction ultimately begins and ends with the heroes. Comic book writers and artists create supervillains, who move in and out as guest
stars and supporting cast, first and foremost to reveal how heroic the comics' stars can be.
Glossary:
1. fiend – an evil and cruel person
2. to dwell – remain
3. foe – an enemy
4. to wield – influence, use power
5. menace – threat
6. to nab thugs – arrest criminals
7. odds – probability
8. to traipse into mob hangouts – walk among places where gangs, criminals meet
9. poorly lit alleyways – narrow road or path with little light
10. to baffle – confuse somebody completely
b) People who haven't met their most basic needs will have difficulty maturing.
c) Humans have been captivated by stories of heroes facing off against superhuman foes.
d) We have needed heroes who rise to the occasion, overcome great odds and take down giants.
www.tecconcursos.com.br/questoes/2077201
Transcript:
Q: To what extent, if at all, do you feel that your generation will have had a better or worse life than your parent’s generation, or will it be about the same?
Key: Better
Total
Great Britain
Under 30s
1 China 78%
2 Brazil 48%
3 Turkey 47%
4 India 46%
5 Japan 41%
6 Russia 41%
7 S. Africa 41%
T Total 37%
8 Argentina 34%
9 Sweden 32%
10 Australia 30%
11 Germany 30%
12 Poland 30%
13 S. Korea 27%
14 US 26%
15 Canada 24%
16 GB 22%
17 Italy 21%
18 Spain 16%
19 France 16%
20 Belgium 12%
Mark the option in which the information DISAGREES with the chart.
a) Almost a quarter of Canadian youth feel that their future living conditions will improve.
b) India, Brazil and Turkey are more pessimistic than optimistic for their young compared to the US.
c) The young in China are more likely to think things will be better rather than worse.
d) Belgium, Spain and France ranked towards the bottom.
www.tecconcursos.com.br/questoes/2077208
It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of
incredulity, it was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had
nothing before us, we were all going direct to Heaven, we were all going the other way – in short, the period was so far like the present period, that some of its
noisiest authorities insisted on its being received, for good or for evil, in the superlative degree of comparison only.
www.tecconcursos.com.br/questoes/2077214
Franval who lived in Paris, where he was born, possessed, along with an income of 400,000 livres, the finest figure, the most pleasant face and the most varied
talents; but beneath this attractive exterior lay hidden every vice, and unfortunately those of which the adoption and habitual indulgence lead so rapidly to crime.
An imagination more unbridled than anything one can depict was Franval’s prime defect; men of this kind do not mend their ways, the decline of power makes
them worse; the less they do, the more they undertake; the less they achieve, the more they invent; each age brings new ideas, and satiety, far from cooling their
ardour, only prepares the way for more fatal refinements.
(SADE, Marquis de. The Gothic Tales of the Marquis de Sade. 2000.)
www.tecconcursos.com.br/questoes/2077217
TEXT I
It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness, it was the epoch of belief, it was the epoch of incredulity, it
was the season of Light, it was the season of Darkness, it was the spring of hope, it was the winter of despair, we had everything before us, we had nothing before us,
we were all going direct to Heaven, we were all going the other way – in short, the period was so far like the present period, that some of its noisiest authorities insisted
on its being received, for good or for evil, in the superlative degree of comparison only.
TEXT II
Franval who lived in Paris, where he was born, possessed, along with an income of 400,000 livres, the finest figure, the most pleasant face and the most varied talents;
but beneath this attractive exterior lay hidden every vice, and unfortunately those of which the adoption and habitual indulgence lead so rapidly to crime. An imagination
more unbridled than anything one can depict was Franval’s prime defect; men of this kind do not mend their ways, the decline of power makes them worse; the less they
do, the more they undertake; the less they achieve, the more they invent; each age brings new ideas, and satiety, far from cooling their ardour, only prepares the way for
more fatal refinements.
(SADE, Marquis de. The Gothic Tales of the Marquis de Sade. 2000.)
A similarity between Text I and Text II can be found in the following aspect(s):
www.tecconcursos.com.br/questoes/2077222
(YEATS, W.B. An Irish Airman Foresees His Death in Rhyme and Reason, An Anthology. Org. O’MALLEY, Raymond. Hart-Davis Educational.)
Vocabulary:
a) meet my fate.
b) waste of breath.
c) tumult in the clouds.
d) impulse of delight.
www.tecconcursos.com.br/questoes/2077225
(YEATS, W.B. An Irish Airman Foresees His Death in Rhyme and Reason, An Anthology. Org. O’MALLEY, Raymond. Hart-Davis Educational.)
Vocabulary:
www.tecconcursos.com.br/questoes/2077226
(YEATS, W.B. An Irish Airman Foresees His Death in Rhyme and Reason, An Anthology. Org. O’MALLEY, Raymond. Hart-Davis Educational.)
Vocabulary:
www.tecconcursos.com.br/questoes/2077238
(SUMMER, Bernard; GILBERT, Gillian; HOOK, Peter; MORRIS, Stephen. Lyrics to Love Vigilantes, performed by New Order, Low Life CD, track 1, Universal Music Publishing Group, 1986. Taken from
https://lyricfind.com)
Read the statements below considering the aspects of grammar and meaning of the text.
III. Some parts of speech were left out of the phrase in line 4.
a) I and II.
b) I and V.
c) II and III.
d) III and IV.
www.tecconcursos.com.br/questoes/2077245
TEXT I
(YEATS, W.B. An Irish Airman Foresees His Death in Rhyme and Reason, An Anthology. Org. O’MALLEY, Raymond. Hart-Davis Educational.)
TEXT II
(SUMMER, Bernard; GILBERT, Gillian; HOOK, Peter; MORRIS, Stephen. Lyrics to Love Vigilantes, performed by New Order, Low Life CD, track 1, Universal Music Publishing Group, 1986. Taken from
https://lyricfind.com)
Whose characteristic is described below? Write 4 for the speaker in Text I, and 5 for the speaker in Text II.
www.tecconcursos.com.br/questoes/2077271
Elon Musk, the CEO of SpaceX and Tesla, caused something of a stir at the annual Air Force Association’s Air Warfare Symposium in Orlando, Florida. During an informal
‘fireside chat’ with Lt. Gen. John Thompson of Space Command on Friday, Musk told a room packed with fighter pilots that “the fighter jet era has passed”. Now, Musk is
renowned for finding soundbites for the media to hook onto, and also of causing calculated disruption in his target audience, but even by his standards this is a well-
baited line. What makes this such a bold and controversial statement?
The contemporary battlefield is far more networked, complex, congested and lethal – it’s those that adapt that will survive. Simply being ‘a good stick’ will rapidly no
longer cut it in the hostile skies of the future.
Musk’s main thrust is, however, that the human in the cockpit is the limiting factor in air combat now, not the advantage.
Firstly, humans are awkward for aircraft designers to accommodate. In a modern fighter the pilot sits on an ejection seat for emergency escape, needs an oxygen supply
and air-conditioning to function at extremes of temperature and altitude, and requires flight controls and instruments to fly and fight the machine. All of these add a
significant amount of weight and cost into the aircraft. Moreover, having a transparent canopy to permit the crew to see out does little to improve signature management1
as radar cross section and the chances of reflected sun glint all increase. The human also adds more weight, especially when encumbered with flight kit such as helmet,
NVGs, g-suit and flight planning and survival equipment. Traditionally, pilots sit toward the front of the aircraft to afford them the best view for take-off and landing, as
well as air combat. This has implications on aerodynamics, stealth1 design and weight distribution factors. In short, it can make the designers life a lot simpler if the
pilot/crew are not in the aircraft.
Vocabulary:
1. Signature management and stealth: both terms refer to technology that reduces the likelihood of personnel, aircrafts, missiles, etc. being detected.
www.tecconcursos.com.br/questoes/2077273
Elon Musk, the CEO of SpaceX and Tesla, caused something of a stir at the annual Air Force Association’s Air Warfare Symposium in Orlando, Florida. During an informal
‘fireside chat’ with Lt. Gen. John Thompson of Space Command on Friday, Musk told a room packed with fighter pilots that “the fighter jet era has passed”. Now, Musk is
renowned for finding soundbites for the media to hook onto, and also of causing calculated disruption in his target audience, but even by his standards this is a well-
baited line. What makes this such a bold and controversial statement?
The contemporary battlefield is far more networked, complex, congested and lethal – it’s those that adapt that will survive. Simply being ‘a good stick’ will rapidly no
longer cut it in the hostile skies of the future.
Musk’s main thrust is, however, that the human in the cockpit is the limiting factor in air combat now, not the advantage.
Firstly, humans are awkward for aircraft designers to accommodate. In a modern fighter the pilot sits on an ejection seat for emergency escape, needs an oxygen supply
and air-conditioning to function at extremes of temperature and altitude, and requires flight controls and instruments to fly and fight the machine. All of these add a
significant amount of weight and cost into the aircraft. Moreover, having a transparent canopy to permit the crew to see out does little to improve signature management1
as radar cross section and the chances of reflected sun glint all increase. The human also adds more weight, especially when encumbered with flight kit such as helmet,
NVGs, g-suit and flight planning and survival equipment. Traditionally, pilots sit toward the front of the aircraft to afford them the best view for take-off and landing, as
well as air combat. This has implications on aerodynamics, stealth1 design and weight distribution factors. In short, it can make the designers life a lot simpler if the
pilot/crew are not in the aircraft.
Vocabulary:
1. Signature management and stealth: both terms refer to technology that reduces the likelihood of personnel, aircrafts, missiles, etc. being detected.
www.tecconcursos.com.br/questoes/2077274
Elon Musk, the CEO of SpaceX and Tesla, caused something of a stir at the annual Air Force Association’s Air Warfare Symposium in Orlando, Florida. During an informal
‘fireside chat’ with Lt. Gen. John Thompson of Space Command on Friday, Musk told a room packed with fighter pilots that “the fighter jet era has passed”. Now, Musk is
renowned for finding soundbites for the media to hook onto, and also of causing calculated disruption in his target audience, but even by his standards this is a well-
baited line. What makes this such a bold and controversial statement?
The contemporary battlefield is far more networked, complex, congested and lethal – it’s those that adapt that will survive. Simply being ‘a good stick’ will rapidly no
longer cut it in the hostile skies of the future.
Musk’s main thrust is, however, that the human in the cockpit is the limiting factor in air combat now, not the advantage.
Firstly, humans are awkward for aircraft designers to accommodate. In a modern fighter the pilot sits on an ejection seat for emergency escape, needs an oxygen supply
and air-conditioning to function at extremes of temperature and altitude, and requires flight controls and instruments to fly and fight the machine. All of these add a
significant amount of weight and cost into the aircraft. Moreover, having a transparent canopy to permit the crew to see out does little to improve signature management1
as radar cross section and the chances of reflected sun glint all increase. The human also adds more weight, especially when encumbered with flight kit such as helmet,
NVGs, g-suit and flight planning and survival equipment. Traditionally, pilots sit toward the front of the aircraft to afford them the best view for take-off and landing, as
well as air combat. This has implications on aerodynamics, stealth1 design and weight distribution factors. In short, it can make the designers life a lot simpler if the
pilot/crew are not in the aircraft.
Vocabulary:
1. Signature management and stealth: both terms refer to technology that reduces the likelihood of personnel, aircrafts, missiles, etc. being detected.
www.tecconcursos.com.br/questoes/2077278
Elon Musk, the CEO of SpaceX and Tesla, caused something of a stir at the annual Air Force Association’s Air Warfare Symposium in Orlando, Florida. During an informal
‘fireside chat’ with Lt. Gen. John Thompson of Space Command on Friday, Musk told a room packed with fighter pilots that “the fighter jet era has passed”. Now, Musk is
renowned for finding soundbites for the media to hook onto, and also of causing calculated disruption in his target audience, but even by his standards this is a well-
baited line. What makes this such a bold and controversial statement?
The contemporary battlefield is far more networked, complex, congested and lethal – it’s those that adapt that will survive. Simply being ‘a good stick’ will rapidly no
longer cut it in the hostile skies of the future.
Musk’s main thrust is, however, that the human in the cockpit is the limiting factor in air combat now, not the advantage.
Firstly, humans are awkward for aircraft designers to accommodate. In a modern fighter the pilot sits on an ejection seat for emergency escape, needs an oxygen supply
and air-conditioning to function at extremes of temperature and altitude, and requires flight controls and instruments to fly and fight the machine. All of these add a
significant amount of weight and cost into the aircraft. Moreover, having a transparent canopy to permit the crew to see out does little to improve signature management1
as radar cross section and the chances of reflected sun glint all increase. The human also adds more weight, especially when encumbered with flight kit such as helmet,
NVGs, g-suit and flight planning and survival equipment. Traditionally, pilots sit toward the front of the aircraft to afford them the best view for take-off and landing, as
well as air combat. This has implications on aerodynamics, stealth1 design and weight distribution factors. In short, it can make the designers life a lot simpler if the
pilot/crew are not in the aircraft.
Vocabulary:
1. Signature management and stealth: both terms refer to technology that reduces the likelihood of personnel, aircrafts, missiles, etc. being detected.
Decide if the sentences are true (T) or false (F) according to the text; then mark the right alternative.
I. Designers are challenged to find solutions that enable fitting flying conditions.
II. The human factor is considered an asset when it comes to air combat.
www.tecconcursos.com.br/questoes/1655159
34) Answer the question according to the text.
TEXT
When making a decision, it is a common impulse to look and see what others are doing. Nevertheless, it is often unclear whether the path that everyone else may be
following is good for us as well. After all, sometimes following the crowd has merit - at other times, it is simply peer pressure blinding us.
The phenomenon of looking to others and following the crowd has been studied by social science for a long time. Nevertheless, those findings do not always make their
way to individual decision-makers. Therefore, let’s review why people conform to the crowd – and under what conditions it is a good idea to go your own way instead.
To start, individuals tend to look to the opinions of others, especially when they are unsure and lack information from other sources. This dynamic was supported by
classic research from Sherif (1937), who explored how a person’s perception of a very ambiguous stimuli can be influenced by the opinion of others. Sherif (1937) asked
participants to watch a small light in a dark and featureless room and evaluate how much that light moved around. In actuality, however, the light never moved at all –
but the way our perception works in that situation gives the possible illusion of movement (called the Autokinetic Effect). In this uncertain and ambiguous perceptual
situation, Sherif (1937) found that individuals were quite susceptible to the influence of the opinions of others when trying to decide how much light was “moving”.
Unfortunately, this phenomenon also extends to individuals following the crowd, even when they can clearly see that others are wrong. This was first evaluated by Asch
(1955), who asked participants to pick a line from a few choices of varying lengths that matched up with another example line given to them. From a perceptual
standpoint, the task was easy – as the correct choice of which lines were actually similar to one another was clear. Nevertheless, when participants were surrounded by
other individuals giving the wrong answer, they often conformed and made the wrong choice as well. Thus, even when the correct choice is clear, and what others are
doing is wrong, that peer pressure can still cause us to doubt ourselves and follow the crowd.
Why is it that we are so compelled to follow the crowd, even when it is objectively clear that they are wrong? According to more recent research, we may simply be wired
that way. Specifically, these social influences can actually change our perceptions and memories (Edelson, Sharot, Dolan, & Dudai, 2011). Therefore, rather than
knowingly making the wrong choice just to conform to peer pressure, the influence of others may actually change what we see as the correct choice in the moment and
remember as the right thing after the fact. Beyond that, we might just have “herding brains” with built-in components that monitor our social alignments and make us feel
good when we follow the crowd too (Shamay-Tsoory, Saporta, Marton-Alper, & Gvirts, 2019).
Fortunately, this effect has good points as well. In many cases, group decision-making can help individuals look beyond their own private perspectives and make more
rational decisions (Fahr & Irlenbusch, 2011). Furthermore, pro-social and altruistic behaviors can be influenced and shared through such conformity as well (Nook, Ong,
Morelli, Mitchell, & Zaki, 2016). Therefore, sometimes following the crowd helps people get along and make better decisions too.
Given the above, when making a decision, it is important to consider whether following others is a good idea – or is leading you astray instead. Some simple steps can
help you figure it out.
Getting swept away by what everyone else is doing is often an emotional and thoughtless process. We are conforming simply because we have not given sufficient
attention and effort toward considering any other options. Therefore, unless you are in an emergency situation and need to immediately follow everyone else toward the
nearest exit, it might be a good idea to switch to more deliberate thinking processes, rather than just going with your initial reaction.
Some choices and decision-making situations are more individual, while others are more social. Therefore, it is important to consider the specific situation. Is this an
individual choice, or does it involve others? If you have sufficient information to make a clear choice on your own, and you do not need group approval, then you might
want to make up your own mind. If you are personally unsure, or you need the support of others to make something happen, then taking the opinion of others into
consideration might be a good idea instead.
It is generally a good idea to evaluate your choices and decisions from multiple perspectives. The same is true for following the opinion of others too. Although it might
not feel that way at times, especially in the modern day of media coverage and social networking, everyone is not doing it – whatever “it” is that you are considering.
Given that, before you follow the advice or choices of any particular group of people, it might be a good idea to look at what other groups of people are doing or choosing
too. In addition, we can learn a lot from people making choices contrary to ourselves or our preferred group, particularly about potential down-sides to choices we might
not be seeing. Therefore, if you do need to look to others to help provide information regarding a particular choice or decision, then it might help to seek out people with
a few different opinions, weigh your options among them, and figure out what will work best for you.
Mark the option that makes an appropriate title for the text.
www.tecconcursos.com.br/questoes/1655161
TEXT
When making a decision, it is a common impulse to look and see what others are doing. Nevertheless, it is often unclear whether the path that everyone else may be
following is good for us as well. After all, sometimes following the crowd has merit - at other times, it is simply peer pressure blinding us.
The phenomenon of looking to others and following the crowd has been studied by social science for a long time. Nevertheless, those findings do not always make their
way to individual decision-makers. Therefore, let’s review why people conform to the crowd – and under what conditions it is a good idea to go your own way instead.
To start, individuals tend to look to the opinions of others, especially when they are unsure and lack information from other sources. This dynamic was supported by
classic research from Sherif (1937), who explored how a person’s perception of a very ambiguous stimuli can be influenced by the opinion of others. Sherif (1937) asked
participants to watch a small light in a dark and featureless room and evaluate how much that light moved around. In actuality, however, the light never moved at all –
but the way our perception works in that situation gives the possible illusion of movement (called the Autokinetic Effect). In this uncertain and ambiguous perceptual
situation, Sherif (1937) found that individuals were quite susceptible to the influence of the opinions of others when trying to decide how much light was “moving”.
Unfortunately, this phenomenon also extends to individuals following the crowd, even when they can clearly see that others are wrong. This was first evaluated by Asch
(1955), who asked participants to pick a line from a few choices of varying lengths that matched up with another example line given to them. From a perceptual
standpoint, the task was easy – as the correct choice of which lines were actually similar to one another was clear. Nevertheless, when participants were surrounded by
other individuals giving the wrong answer, they often conformed and made the wrong choice as well. Thus, even when the correct choice is clear, and what others are
doing is wrong, that peer pressure can still cause us to doubt ourselves and follow the crowd.
Why is it that we are so compelled to follow the crowd, even when it is objectively clear that they are wrong? According to more recent research, we may simply be wired
that way. Specifically, these social influences can actually change our perceptions and memories (Edelson, Sharot, Dolan, & Dudai, 2011). Therefore, rather than
knowingly making the wrong choice just to conform to peer pressure, the influence of others may actually change what we see as the correct choice in the moment and
remember as the right thing after the fact. Beyond that, we might just have “herding brains” with built-in components that monitor our social alignments and make us feel
good when we follow the crowd too (Shamay-Tsoory, Saporta, Marton-Alper, & Gvirts, 2019).
Fortunately, this effect has good points as well. In many cases, group decision-making can help individuals look beyond their own private perspectives and make more
rational decisions (Fahr & Irlenbusch, 2011). Furthermore, pro-social and altruistic behaviors can be influenced and shared through such conformity as well (Nook, Ong,
Morelli, Mitchell, & Zaki, 2016). Therefore, sometimes following the crowd helps people get along and make better decisions too.
Given the above, when making a decision, it is important to consider whether following others is a good idea – or is leading you astray instead. Some simple steps can
help you figure it out.
Getting swept away by what everyone else is doing is often an emotional and thoughtless process. We are conforming simply because we have not given sufficient
attention and effort toward considering any other options. Therefore, unless you are in an emergency situation and need to immediately follow everyone else toward the
nearest exit, it might be a good idea to switch to more deliberate thinking processes, rather than just going with your initial reaction.
Some choices and decision-making situations are more individual, while others are more social. Therefore, it is important to consider the specific situation. Is this an
individual choice, or does it involve others? If you have sufficient information to make a clear choice on your own, and you do not need group approval, then you might
want to make up your own mind. If you are personally unsure, or you need the support of others to make something happen, then taking the opinion of others into
consideration might be a good idea instead.
It is generally a good idea to evaluate your choices and decisions from multiple perspectives. The same is true for following the opinion of others too. Although it might
not feel that way at times, especially in the modern day of media coverage and social networking, everyone is not doing it – whatever “it” is that you are considering.
Given that, before you follow the advice or choices of any particular group of people, it might be a good idea to look at what other groups of people are doing or choosing
too. In addition, we can learn a lot from people making choices contrary to ourselves or our preferred group, particularly about potential down-sides to choices we might
not be seeing. Therefore, if you do need to look to others to help provide information regarding a particular choice or decision, then it might help to seek out people with
a few different opinions, weigh your options among them, and figure out what will work best for you.
a) individuals surrounded by other participants might be tricked into providing a wrong answer.
b) personal beliefs provide guidance when there’s uncertainty and lack of information.
c) the Autokinetic Effect describes how light can improve people’s perception.
d) the influence of the opinion of others can help us make correct decisions.
www.tecconcursos.com.br/questoes/1655162
a) the more people are in control of their choices, the better they can match their preferences.
b) crowd following does not occur when people see that others are mistaken.
c) people might change their beliefs or behavior in order to fit in with the group.
d) people made the wrong choice to avoid disagreement.
www.tecconcursos.com.br/questoes/1655164
I. We ought to consider things more slowly and intentionally to make better decisions.
II. When under pressure, it might be valid to follow the crowd.
III. Being driven by what other people are doing is a rational process.
www.tecconcursos.com.br/questoes/1655165
a) scientists found a definite answer as to why people are compelled to follow the crowd.
b) social impacts are reduced when people conform to the crowd.
c) going your own way eventually leads to bad decisions.
d) following the crowd has pros and cons.
www.tecconcursos.com.br/questoes/1655166
The statement that is most closely related to the idea found in paragraph 9 is:
a) Following others will take you away from the correct path.
b) People are drawn to the immediate rewards of a potential choice.
c) Individual decisions tend to be made to the benefit of the group.
d) We’re often balancing between what is best for ourselves and for others too.
www.tecconcursos.com.br/questoes/1655168
The following factors can influence people into following the crowd, EXCEPT,
www.tecconcursos.com.br/questoes/1655173
a) when making important decisions, it is helpful to stay open to facts and possibilities that we don’t want or like.
b) our reasoning about an issue may lead us to pursue a deeper understanding of who we are.
c) we often automatically accept things as “true” before we carefully deliberate about them.
d) gathering every fact and carefully thinking through every decision is impossible.
www.tecconcursos.com.br/questoes/1473423
It weighed about 10,000 tons, entered the atmosphere at a speed of 64,000 km/h and exploded over a city with a blast of 500 kilotons. But on 15 February 2013, we
were lucky. The meteorite that showered pieces of rock over Chelyabinsk, Russia, was relatively small, at only about 17 metres wide. Although many people were
injured by falling glass, the damage was nothing compared to what had happened in Siberia nearly one hundred years ago, when a relatively small object (approximately
50 metres in diameter) exploded in mid-air over a forest region, flattening about 80 million trees. If it had exploded over a city such as Moscow or London, millions of
people would have been killed.
By a strange coincidence, the same day that the meteorite terrified the people of Chelyabinsk, another 50m-wide asteroid passed relatively close to Earth. Scientists were
expecting that visit and know that the asteroid will return to fly close by us in 2046, but the Russian meteorite earlier in the day had been too small for anyone to spot.
Most scientists agree that comets and asteroids pose the biggest natural threat to human existence. It was probably a large asteroid or comet colliding with Earth which
wiped out the dinosaurs about 65 million years ago. An enormous object, 10 to 16 km in diameter, struck the Yucatan region in Mexico with the force of 100 megatons.
That is the equivalent of one Hiroshima bomb for every person alive on Earth today.
Many scientists, including the late Stephen Hawking, say that any comet or asteroid greater than 20km in diameter that hits Earth will result in the complete destruction
of complex life, including all animals and most plants. As we have seen, even a much smaller asteroid can cause great damage.
The Earth has been kept fairly safe for the last 65 million years by good fortune and the massive gravitational field of the planet Jupiter. Our cosmic guardian, with its
stable circular orbit far from the sun, sweeps up and scatters away most of the dangerous comets and asteroids which might cross Earth’s orbit.
After the Chelyabinsk meteorite, scientists are now monitoring potential hazards even more carefully but, as far as they know, there is no danger in the foreseeable
future.
• Comet – a ball of rock and ice that sends out a tail of gas and dust behind it. Bright comets only appear in our visible night sky about once every ten years.
• Asteroid – a rock a few feet to several kilometres in diameter. Unlike comets, asteroids have no tail. Most are too small to cause any damage and burn up in the atmosphere.
• Meteoroid – part of an asteroid or comet.
• Meteorite – what a meteoroid is called when it hits Earth.
www.tecconcursos.com.br/questoes/1473427
www.tecconcursos.com.br/questoes/1473429
www.tecconcursos.com.br/questoes/1474176
www.tecconcursos.com.br/questoes/1474179
www.tecconcursos.com.br/questoes/1474180
www.tecconcursos.com.br/questoes/1474182
www.tecconcursos.com.br/questoes/1474187
www.tecconcursos.com.br/questoes/1474189
www.tecconcursos.com.br/questoes/1260473
Cancer is the second leading cause of death in the United States, in Germany and in many other industrialized countries. In 2007, about 12 million people were diagnosed
with cancer worldwide with a mortality rate of 7.6 million (American Cancer Society, 2007). In the industrial countries, the most commonly diagnosed cancers in men are
prostate cancer, lung cancer and colorectal cancer. Women are most commonly diagnosed with breast cancer, gastric cancer and lung cancer.
The symptoms of cancer depend on the type of the disease, but there are common symptoms caused by cancer and/or by its medical treatment (e.g., chemotherapy and
radiation). Common physical symptoms are pain, fatigue, sleep disturbances, loss of appetite, nausea (feeling sick, vomiting), dizziness, limited physical activity, hair loss,
a sore mouth/throat and bowel problems. Cancer also often causes psychological problems such as depression, anxiety, mood disturbances, stress, insecurity, grief and
decreased self-esteem. This, in turn, can implicate social consequences. Social isolation can occur due to physical or psychological symptoms (for example, feeling too
tired to meet friends, cutting oneself off due to depressive complaints).
Besides conventional pharmacological treatments of cancer, there are treatments to meet psychological and physical needs of the patient. Psychological consequences of
cancer, such as depression, anxiety or loss of control, can be counteracted by psychotherapy. For example, within cognitive therapy cancer patients may develop coping
strategies to handle the disease. Research indicates that music therapy, which is a form of psychotherapy, can have positive effects on both physiological and
psychological symptoms of cancer patients as well as in acute or palliative situations.
There are several definitions of music therapy. According to the World Federation of Music Therapy (WFMT, 1996), music therapy is: “the use of music and/or its music
elements (sound, rhythm, melody and harmony) by a qualified music therapist, with a client or group, in a process designed to facilitate and promote communication,
relationship, learning, mobilization, expression, organization, and other relevant therapeutic objectives, in order to meet physical, emotional, mental, social and cognitive
needs”.
The Dutch Music Therapy Association (NVCT, 1999) defines music therapy as “a methodological form of assistance in which musical means are used within a therapeutic
relation to manage changes, developments, stabilisation or acceptance on the emotional, behavioural, cognitive, social or on the physical field”.
The assumption is that the patient's musical behaviour conforms to their general behaviour. The starting points are the features of the patient's specific disorder or
disease pattern. There is an analogy between psychological problems and musical behaviour, which means that emotions can be expressed musically. For patients who
have difficulties in expressing emotions, music therapy can be a useful medium. Music therapy might be a useful intervention for breast cancer patients in order to
facilitate and enhance their emotional expressivity. Besides analogy, there are further qualities of music that can be beneficial within therapeutic treatment. One of
these qualities is symbolism: music can symbolize persons, objects, incidents, experiences or memories of daily life. Therefore, music is a reality, which represents another
reality. The symbolism of the musical reality enables the patient to deal safely with the other reality for it evokes memories about persons, objects or incidents. These
associations can be perceived as positive or negative, so they release emotions in the patient.
Music therapy both addresses physical and psychological needs of the patient. Numerous studies indicate that music therapy can be beneficial to both acute cancer
patients and palliative cancer patients in the final stage of disease.
Most research with acute cancer patients receiving chemotherapy, surgery or stem cell transplantation examined the effectiveness of receptive music therapy. Listening to
music during chemotherapy, either played live by the music therapist or from tape has a positive effect on pain perception, relaxation, anxiety and mood. There was also
found a decrease in diastolic blood pressure or heart rate and an improvement in fatigue; insomnia and appetite loss could be significantly decreased in patients older
than 45 years. Further improvements by receptive music therapy were found for physical comfort, vitality, dizziness and tolerability of the chemotherapy. A study with
patients undergoing surgery found that receptive music therapy led to decreased anxiety, stress and relaxation levels before, during and after surgery. Music therapy can
also be applied in palliative situations, for example to patients with terminal cancer who live in hospices.
Studies indicate that music therapy may be beneficial for cancer patients in acute and palliative situations, but the benefits of music therapy for convalescing cancer
patients remain unclear. Whereas music therapy interventions for acute and palliative patients often focus on physiological and psychosomatic symptoms, such as pain
perception and reducing medical side-effects, music therapy with posthospital curative treatment could have its main focus on psychological aspects. A cancer patient is
not free from cancer until five years after the tumour ablation. The patient fears that the cancer has not been defeated. In this stage of the disease, patients frequently
feel insecure, depressive and are emotionally unstable. How to handle irksome and negative emotions is an important issue for many oncology patients. After the difficult
period of the medical treatment, which they often have overcome in a prosaic way by masking emotions, patients often express the wish to become aware of themselves
again. They may wish to grapple with negative emotions due to their disease. Other patients wish to experience positive feelings, such as enjoyment and vitality.
The results indicate that music therapy can also have positive influences on well-being of cancer patients in the post-hospital curative stage as well as they offer valuable
information about patients' needs in this state of treatment and how effects can be dealt with properly.
Cancer is
www.tecconcursos.com.br/questoes/1260475
www.tecconcursos.com.br/questoes/1260477
The text
www.tecconcursos.com.br/questoes/1260478
The passage “patient's musical behaviour conforms to their general behaviour” suggests that
www.tecconcursos.com.br/questoes/1260481
www.tecconcursos.com.br/questoes/1260482
www.tecconcursos.com.br/questoes/1260485
www.tecconcursos.com.br/questoes/1260486
Read the statement based on paragraph 8 and mark the action that happened first. A study discovered that receptive music therapy had decreased anxiety and stress
levels before, during and after surgeries. Also, music therapy can be applied to different levels of the disease.
a) Discover.
b) Decrease.
c) Can.
d) Apply.
www.tecconcursos.com.br/questoes/1260488
59) TEXT
Cancer is the second leading cause of death in the United States, in Germany and in many other industrialized countries. In 2007, about 12 million people were diagnosed
with cancer worldwide with a mortality rate of 7.6 million (American Cancer Society, 2007). In the industrial countries, the most commonly diagnosed cancers in men are
prostate cancer, lung cancer and colorectal cancer. Women are most commonly diagnosed with breast cancer, gastric cancer and lung cancer.
The symptoms of cancer depend on the type of the disease, but there are common symptoms caused by cancer and/or by its medical treatment (e.g., chemotherapy and
radiation). Common physical symptoms are pain, fatigue, sleep disturbances, loss of appetite, nausea (feeling sick, vomiting), dizziness, limited physical activity, hair loss,
a sore mouth/throat and bowel problems. Cancer also often causes psychological problems such as depression, anxiety, mood disturbances, stress, insecurity, grief and
decreased self-esteem. This, in turn, can implicate social consequences. Social isolation can occur due to physical or psychological symptoms (for example, feeling too
tired to meet friends, cutting oneself off due to depressive complaints).
Besides conventional pharmacological treatments of cancer, there are treatments to meet psychological and physical needs of the patient. Psychological consequences of
cancer, such as depression, anxiety or loss of control, can be counteracted by psychotherapy. For example, within cognitive therapy cancer patients may develop coping
strategies to handle the disease. Research indicates that music therapy, which is a form of psychotherapy, can have positive effects on both physiological and
psychological symptoms of cancer patients as well as in acute or palliative situations.
There are several definitions of music therapy. According to the World Federation of Music Therapy (WFMT, 1996), music therapy is: “the use of music and/or its music
elements (sound, rhythm, melody and harmony) by a qualified music therapist, with a client or group, in a process designed to facilitate and promote communication,
relationship, learning, mobilization, expression, organization, and other relevant therapeutic objectives, in order to meet physical, emotional, mental, social and cognitive
needs”.
The Dutch Music Therapy Association (NVCT, 1999) defines music therapy as “a methodological form of assistance in which musical means are used within a therapeutic
relation to manage changes, developments, stabilisation or acceptance on the emotional, behavioural, cognitive, social or on the physical field”.
The assumption is that the patient's musical behaviour conforms to their general behaviour. The starting points are the features of the patient's specific disorder or
disease pattern. There is an analogy between psychological problems and musical behaviour, which means that emotions can be expressed musically. For patients who
have difficulties in expressing emotions, music therapy can be a useful medium. Music therapy might be a useful intervention for breast cancer patients in order to
facilitate and enhance their emotional expressivity. Besides analogy, there are further qualities of music that can be beneficial within therapeutic treatment. One of
these qualities is symbolism: music can symbolize persons, objects, incidents, experiences or memories of daily life. Therefore, music is a reality, which represents another
reality. The symbolism of the musical reality enables the patient to deal safely with the other reality for it evokes memories about persons, objects or incidents. These
associations can be perceived as positive or negative, so they release emotions in the patient.
Music therapy both addresses physical and psychological needs of the patient. Numerous studies indicate that music therapy can be beneficial to both acute cancer
patients and palliative cancer patients in the final stage of disease.
Most research with acute cancer patients receiving chemotherapy, surgery or stem cell transplantation examined the effectiveness of receptive music therapy. Listening to
music during chemotherapy, either played live by the music therapist or from tape has a positive effect on pain perception, relaxation, anxiety and mood. There was also
found a decrease in diastolic blood pressure or heart rate and an improvement in fatigue; insomnia and appetite loss could be significantly decreased in patients older
than 45 years. Further improvements by receptive music therapy were found for physical comfort, vitality, dizziness and tolerability of the chemotherapy. A study with
patients undergoing surgery found that receptive music therapy led to decreased anxiety, stress and relaxation levels before, during and after surgery. Music therapy can
also be applied in palliative situations, for example to patients with terminal cancer who live in hospices.
Studies indicate that music therapy may be beneficial for cancer patients in acute and palliative situations, but the benefits of music therapy for convalescing cancer
patients remain unclear. Whereas music therapy interventions for acute and palliative patients often focus on physiological and psychosomatic symptoms, such as pain
perception and reducing medical side-effects, music therapy with posthospital curative treatment could have its main focus on psychological aspects. A cancer patient is
not free from cancer until five years after the tumour ablation. The patient fears that the cancer has not been defeated. In this stage of the disease, patients frequently
feel insecure, depressive and are emotionally unstable. How to handle irksome and negative emotions is an important issue for many oncology patients. After the difficult
period of the medical treatment, which they often have overcome in a prosaic way by masking emotions, patients often express the wish to become aware of themselves
again. They may wish to grapple with negative emotions due to their disease. Other patients wish to experience positive feelings, such as enjoyment and vitality.
The results indicate that music therapy can also have positive influences on well-being of cancer patients in the post-hospital curative stage as well as they offer valuable
information about patients' needs in this state of treatment and how effects can be dealt with properly.
www.tecconcursos.com.br/questoes/1264684
TEXT
Why are we fascinated by supervillains? Posing the question is much like asking why evil itself intrigues us, but there's much more to our continued interest in
supervillains than meets the eye.
Not only do Lex Luthor, Dracula and the Red Skull run unconstrained by conventional morality, they exist outside the limits of reality itself. Their evil, even at its most
realistic, retains a touch of the unreal.
But is our fascination with fantastic fiends 1 healthy? From a psychological perspective, views vary on what drives our enduring interest in superhuman bad guys.
Shadow confrontation: Psychiatrist Carl Jung believed we need to confront and understand our own hidden nature to grow as human beings. Healthy confrontation with
our shadow selves can unearth new strengths (e.g., Bruce Wayne creating his Dark Knight persona to fight crime), whereas unhealthy attempts at confrontation may
involve dwelling on or unleashing the worst parts of ourselves.
Wish fulfillment: Sigmund Freud viewed human nature as inherently antisocial, biologically driven by the undisciplined id's pleasure principle to get what we want when
we want it – born to be bad but held back by society. Even if the psyche fully develops its ego (source of self-control) and superego (conscience), Freudians say the id still
dwells 2 underneath, and it wishes for many selfish things – so it would love to be supervillainous.
Hierarchy of needs: Humanistic psychologist Abraham Maslow held that people who haven't met their most basic needs will have difficulty maturing. If starved for food,
you're unlikely to feel secure. If starved for love and companionship, you'll have trouble building self-esteem. People who dwell on their deficits may envy and resent
others who have more than they do. Some people who are unable to overcome social shortcomings fantasize about obtaining any means, good or bad, to satisfy every
need and greed.
Conditioning: Ivan Pavlov would say we can learn to associate supervillains with other things we value – like entertainment, strength, freedom or the heroes themselves.
Behaviorist B.F. Skinner would likely argue that we can find it reinforcing to watch or read about supervillains, but without knowing what's reinforcing about them, that's a
bit like saying it's rewarding because it's rewarding.
Throughout history, humans have been captivated by stories of heroes facing off against superhuman foes 3. But what specific rewards, needs, wishes and dark dreams
do supervillains satisfy?
Freedom: Superpowered characters enjoy freedoms the rest of us don't. Nobody can arrest Superman unless he lets them (at least not without kryptonite handcuffs). As
much time as supervillains spend locked up, they seem to escape as often as they please, to run unconstrained by rules and regulations. Cosplayers who dress like
Wonder Woman and Captain America can't do any crazy thing that crosses their minds without seeming to mock and insult our heroes, whereas those dressed as villains
get to go wild. Supervillainy feels liberating.
Power: Maybe you envy the power these evil characters wield 4. While that's also a reason to adore superheroes, good guys don't ache to dominate. Stories like
Watchmen and Kingdom Come show how heroes become menaces 5 when they try to take over.
So when dreaming of superpowers, maybe you relate to characters who dream of power as well, from the Scarecrow (who controls individuals' fears) to Doctor Doom
(who's perpetually out to dominate the world).
Better villain than victim: Physiologically, anger activates us and feels better than anxiety or fear. One who feels victimized and cannot figure out constructive ways to
stand up, be strong or become heroic might twist the need for self-assertion into destruction. Alternately, a healthy person simply might focus on how all characters assert
themselves in any given story.
Better villain equals better hero: A hero only appears as heroic as the challenge he or she must overcome. Great heroes require great villains. Without supercriminals, the
world's finest heroes seem like overpowered brutes nabbing thugs 6 unworthy of them. Through myths, legends and lore across time, we have needed heroes who rise to
the occasion, overcome great odds 7 and take down giants.
Facing our fears: Instead of dreading the darkness, you might reduce that dread by shining a light and seeing what's out there. Fiction can help us feel empowered and
enlightened without literally traipsing into mob hangouts 8 and poorly lit alleyways 9.
Exploring the unknown: Our need to challenge the unknown has driven the human race to cover the globe. This powerful curiosity makes us wonder about everything
that baffles 10 us, including the world's worst fiends. Knowledge is power, or at least feels like it. When gritty details repulse us, exploring evil through the filter of fiction
can help us contemplate humanity's worst without turning away or dwelling almost voyeuristically on real human tragedy. Even when the fiction is about improbable
people doing impossible things, the story's fantastic nature reassures us that this cannot happen – and therefore we don't have to turn away.
In the end, our interest in supervillains can be healthy or unhealthy. Even the more maladaptive reasons for such fascination tend to arise from motivations that were
originally healthy and natural – frustrated drives that went the wrong way.
Remember, though, that superheroic fiction ultimately begins and ends with the heroes. Comic book writers and artists create supervillains, who move in and out as guest
stars and supporting cast, first and foremost to reveal how heroic the comics' stars can be.
Glossary:
1. fiend – an evil and cruel person
2. to dwell – remain
3. foe – an enemy
4. to wield – influence, use power
5. menace – threat
6. to nab thugs – arrest criminals
7. odds – probability
8. to traipse into mob hangouts – walk among places where gangs, criminals meet
9. poorly lit alleyways – narrow road or path with little light
10. to baffle – confuse somebody completely
c) A psychological study states that people have been intrigued by the question whether we are fiends or not.
www.tecconcursos.com.br/questoes/1264685
TEXT
Why are we fascinated by supervillains? Posing the question is much like asking why evil itself intrigues us, but there's much more to our continued interest in
supervillains than meets the eye.
Not only do Lex Luthor, Dracula and the Red Skull run unconstrained by conventional morality, they exist outside the limits of reality itself. Their evil, even at its most
realistic, retains a touch of the unreal.
But is our fascination with fantastic fiends 1 healthy? From a psychological perspective, views vary on what drives our enduring interest in superhuman bad guys.
Shadow confrontation: Psychiatrist Carl Jung believed we need to confront and understand our own hidden nature to grow as human beings. Healthy confrontation with
our shadow selves can unearth new strengths (e.g., Bruce Wayne creating his Dark Knight persona to fight crime), whereas unhealthy attempts at confrontation may
involve dwelling on or unleashing the worst parts of ourselves.
Wish fulfillment: Sigmund Freud viewed human nature as inherently antisocial, biologically driven by the undisciplined id's pleasure principle to get what we want when
we want it – born to be bad but held back by society. Even if the psyche fully develops its ego (source of self-control) and superego (conscience), Freudians say the id still
dwells 2 underneath, and it wishes for many selfish things – so it would love to be supervillainous.
Hierarchy of needs: Humanistic psychologist Abraham Maslow held that people who haven't met their most basic needs will have difficulty maturing. If starved for food,
you're unlikely to feel secure. If starved for love and companionship, you'll have trouble building self-esteem. People who dwell on their deficits may envy and resent
others who have more than they do. Some people who are unable to overcome social shortcomings fantasize about obtaining any means, good or bad, to satisfy every
need and greed.
Conditioning: Ivan Pavlov would say we can learn to associate supervillains with other things we value – like entertainment, strength, freedom or the heroes themselves.
Behaviorist B.F. Skinner would likely argue that we can find it reinforcing to watch or read about supervillains, but without knowing what's reinforcing about them, that's a
bit like saying it's rewarding because it's rewarding.
Throughout history, humans have been captivated by stories of heroes facing off against superhuman foes 3. But what specific rewards, needs, wishes and dark dreams
do supervillains satisfy?
Freedom: Superpowered characters enjoy freedoms the rest of us don't. Nobody can arrest Superman unless he lets them (at least not without kryptonite handcuffs). As
much time as supervillains spend locked up, they seem to escape as often as they please, to run unconstrained by rules and regulations. Cosplayers who dress like
Wonder Woman and Captain America can't do any crazy thing that crosses their minds without seeming to mock and insult our heroes, whereas those dressed as villains
get to go wild. Supervillainy feels liberating.
Power: Maybe you envy the power these evil characters wield 4. While that's also a reason to adore superheroes, good guys don't ache to dominate. Stories like
Watchmen and Kingdom Come show how heroes become menaces 5 when they try to take over.
So when dreaming of superpowers, maybe you relate to characters who dream of power as well, from the Scarecrow (who controls individuals' fears) to Doctor Doom
(who's perpetually out to dominate the world).
Better villain than victim: Physiologically, anger activates us and feels better than anxiety or fear. One who feels victimized and cannot figure out constructive ways to
stand up, be strong or become heroic might twist the need for self-assertion into destruction. Alternately, a healthy person simply might focus on how all characters assert
themselves in any given story.
Better villain equals better hero: A hero only appears as heroic as the challenge he or she must overcome. Great heroes require great villains. Without supercriminals, the
world's finest heroes seem like overpowered brutes nabbing thugs 6 unworthy of them. Through myths, legends and lore across time, we have needed heroes who rise to
the occasion, overcome great odds 7 and take down giants.
Facing our fears: Instead of dreading the darkness, you might reduce that dread by shining a light and seeing what's out there. Fiction can help us feel empowered and
enlightened without literally traipsing into mob hangouts 8 and poorly lit alleyways 9.
Exploring the unknown: Our need to challenge the unknown has driven the human race to cover the globe. This powerful curiosity makes us wonder about everything
that baffles 10 us, including the world's worst fiends. Knowledge is power, or at least feels like it. When gritty details repulse us, exploring evil through the filter of fiction
can help us contemplate humanity's worst without turning away or dwelling almost voyeuristically on real human tragedy. Even when the fiction is about improbable
people doing impossible things, the story's fantastic nature reassures us that this cannot happen – and therefore we don't have to turn away.
In the end, our interest in supervillains can be healthy or unhealthy. Even the more maladaptive reasons for such fascination tend to arise from motivations that were
originally healthy and natural – frustrated drives that went the wrong way.
Remember, though, that superheroic fiction ultimately begins and ends with the heroes. Comic book writers and artists create supervillains, who move in and out as guest
stars and supporting cast, first and foremost to reveal how heroic the comics' stars can be.
Glossary:
1. fiend – an evil and cruel person
2. to dwell – remain
3. foe – an enemy
4. to wield – influence, use power
5. menace – threat
6. to nab thugs – arrest criminals
7. odds – probability
8. to traipse into mob hangouts – walk among places where gangs, criminals meet
9. poorly lit alleyways – narrow road or path with little light
10. to baffle – confuse somebody completely
“ [F] there's much more to our continued interest in supervillains than meets the eye.”
www.tecconcursos.com.br/questoes/1264688
TEXT
Why are we fascinated by supervillains? Posing the question is much like asking why evil itself intrigues us, but there's much more to our continued interest in
supervillains than meets the eye.
Not only do Lex Luthor, Dracula and the Red Skull run unconstrained by conventional morality, they exist outside the limits of reality itself. Their evil, even at its most
realistic, retains a touch of the unreal.
But is our fascination with fantastic fiends 1 healthy? From a psychological perspective, views vary on what drives our enduring interest in superhuman bad guys.
Shadow confrontation: Psychiatrist Carl Jung believed we need to confront and understand our own hidden nature to grow as human beings. Healthy confrontation with
our shadow selves can unearth new strengths (e.g., Bruce Wayne creating his Dark Knight persona to fight crime), whereas unhealthy attempts at confrontation may
involve dwelling on or unleashing the worst parts of ourselves.
Wish fulfillment: Sigmund Freud viewed human nature as inherently antisocial, biologically driven by the undisciplined id's pleasure principle to get what we want when
we want it – born to be bad but held back by society. Even if the psyche fully develops its ego (source of self-control) and superego (conscience), Freudians say the id still
dwells 2 underneath, and it wishes for many selfish things – so it would love to be supervillainous.
Hierarchy of needs: Humanistic psychologist Abraham Maslow held that people who haven't met their most basic needs will have difficulty maturing. If starved for food,
you're unlikely to feel secure. If starved for love and companionship, you'll have trouble building self-esteem. People who dwell on their deficits may envy and resent
others who have more than they do. Some people who are unable to overcome social shortcomings fantasize about obtaining any means, good or bad, to satisfy every
need and greed.
Conditioning: Ivan Pavlov would say we can learn to associate supervillains with other things we value – like entertainment, strength, freedom or the heroes themselves.
Behaviorist B.F. Skinner would likely argue that we can find it reinforcing to watch or read about supervillains, but without knowing what's reinforcing about them, that's a
bit like saying it's rewarding because it's rewarding.
Throughout history, humans have been captivated by stories of heroes facing off against superhuman foes 3. But what specific rewards, needs, wishes and dark dreams
do supervillains satisfy?
Freedom: Superpowered characters enjoy freedoms the rest of us don't. Nobody can arrest Superman unless he lets them (at least not without kryptonite handcuffs). As
much time as supervillains spend locked up, they seem to escape as often as they please, to run unconstrained by rules and regulations. Cosplayers who dress like
Wonder Woman and Captain America can't do any crazy thing that crosses their minds without seeming to mock and insult our heroes, whereas those dressed as villains
get to go wild. Supervillainy feels liberating.
Power: Maybe you envy the power these evil characters wield 4. While that's also a reason to adore superheroes, good guys don't ache to dominate. Stories like
Watchmen and Kingdom Come show how heroes become menaces 5 when they try to take over.
So when dreaming of superpowers, maybe you relate to characters who dream of power as well, from the Scarecrow (who controls individuals' fears) to Doctor Doom
(who's perpetually out to dominate the world).
Better villain than victim: Physiologically, anger activates us and feels better than anxiety or fear. One who feels victimized and cannot figure out constructive ways to
stand up, be strong or become heroic might twist the need for self-assertion into destruction. Alternately, a healthy person simply might focus on how all characters assert
themselves in any given story.
Better villain equals better hero: A hero only appears as heroic as the challenge he or she must overcome. Great heroes require great villains. Without supercriminals, the
world's finest heroes seem like overpowered brutes nabbing thugs 6 unworthy of them. Through myths, legends and lore across time, we have needed heroes who rise to
the occasion, overcome great odds 7 and take down giants.
Facing our fears: Instead of dreading the darkness, you might reduce that dread by shining a light and seeing what's out there. Fiction can help us feel empowered and
enlightened without literally traipsing into mob hangouts 8 and poorly lit alleyways 9.
Exploring the unknown: Our need to challenge the unknown has driven the human race to cover the globe. This powerful curiosity makes us wonder about everything
that baffles 10 us, including the world's worst fiends. Knowledge is power, or at least feels like it. When gritty details repulse us, exploring evil through the filter of fiction
can help us contemplate humanity's worst without turning away or dwelling almost voyeuristically on real human tragedy. Even when the fiction is about improbable
people doing impossible things, the story's fantastic nature reassures us that this cannot happen – and therefore we don't have to turn away.
In the end, our interest in supervillains can be healthy or unhealthy. Even the more maladaptive reasons for such fascination tend to arise from motivations that were
originally healthy and natural – frustrated drives that went the wrong way.
Remember, though, that superheroic fiction ultimately begins and ends with the heroes. Comic book writers and artists create supervillains, who move in and out as guest
stars and supporting cast, first and foremost to reveal how heroic the comics' stars can be.
Glossary:
1. fiend – an evil and cruel person
2. to dwell – remain
3. foe – an enemy
4. to wield – influence, use power
5. menace – threat
6. to nab thugs – arrest criminals
7. odds – probability
8. to traipse into mob hangouts – walk among places where gangs, criminals meet
9. poorly lit alleyways – narrow road or path with little light
10. to baffle – confuse somebody completely
a) the conscious knowledge that supervillains reinforce things we value (Pavlov and Skinner).
b) the negative side people need to hide to grow as human beings (Carl Jung).
www.tecconcursos.com.br/questoes/1264695
TEXT
Why are we fascinated by supervillains? Posing the question is much like asking why evil itself intrigues us, but there's much more to our continued interest in
supervillains than meets the eye.
Not only do Lex Luthor, Dracula and the Red Skull run unconstrained by conventional morality, they exist outside the limits of reality itself. Their evil, even at its most
realistic, retains a touch of the unreal.
But is our fascination with fantastic fiends 1 healthy? From a psychological perspective, views vary on what drives our enduring interest in superhuman bad guys.
Shadow confrontation: Psychiatrist Carl Jung believed we need to confront and understand our own hidden nature to grow as human beings. Healthy confrontation with
our shadow selves can unearth new strengths (e.g., Bruce Wayne creating his Dark Knight persona to fight crime), whereas unhealthy attempts at confrontation may
involve dwelling on or unleashing the worst parts of ourselves.
Wish fulfillment: Sigmund Freud viewed human nature as inherently antisocial, biologically driven by the undisciplined id's pleasure principle to get what we want when
we want it – born to be bad but held back by society. Even if the psyche fully develops its ego (source of self-control) and superego (conscience), Freudians say the id still
dwells 2 underneath, and it wishes for many selfish things – so it would love to be supervillainous.
Hierarchy of needs: Humanistic psychologist Abraham Maslow held that people who haven't met their most basic needs will have difficulty maturing. If starved for food,
you're unlikely to feel secure. If starved for love and companionship, you'll have trouble building self-esteem. People who dwell on their deficits may envy and resent
others who have more than they do. Some people who are unable to overcome social shortcomings fantasize about obtaining any means, good or bad, to satisfy every
need and greed.
Conditioning: Ivan Pavlov would say we can learn to associate supervillains with other things we value – like entertainment, strength, freedom or the heroes themselves.
Behaviorist B.F. Skinner would likely argue that we can find it reinforcing to watch or read about supervillains, but without knowing what's reinforcing about them, that's a
bit like saying it's rewarding because it's rewarding.
Throughout history, humans have been captivated by stories of heroes facing off against superhuman foes 3. But what specific rewards, needs, wishes and dark dreams
do supervillains satisfy?
Freedom: Superpowered characters enjoy freedoms the rest of us don't. Nobody can arrest Superman unless he lets them (at least not without kryptonite handcuffs). As
much time as supervillains spend locked up, they seem to escape as often as they please, to run unconstrained by rules and regulations. Cosplayers who dress
like Wonder Woman and Captain America can't do any crazy thing that crosses their minds without seeming to mock and insult our heroes, whereas those dressed as
villains get to go wild. Supervillainy feels liberating.
Power: Maybe you envy the power these evil characters wield 4. While that's also a reason to adore superheroes, good guys don't ache to dominate. Stories like
Watchmen and Kingdom Come show how heroes become menaces 5 when they try to take over.
So when dreaming of superpowers, maybe you relate to characters who dream of power as well, from the Scarecrow (who controls individuals' fears) to Doctor Doom
(who's perpetually out to dominate the world).
Better villain than victim: Physiologically, anger activates us and feels better than anxiety or fear. One who feels victimized and cannot figure out constructive ways to
stand up, be strong or become heroic might twist the need for self-assertion into destruction. Alternately, a healthy person simply might focus on how all characters assert
themselves in any given story.
Better villain equals better hero: A hero only appears as heroic as the challenge he or she must overcome. Great heroes require great villains. Without supercriminals, the
world's finest heroes seem like overpowered brutes nabbing thugs 6 unworthy of them. Through myths, legends and lore across time, we have needed heroes who rise to
the occasion, overcome great odds 7 and take down giants.
Facing our fears: Instead of dreading the darkness, you might reduce that dread by shining a light and seeing what's out there. Fiction can help us feel empowered and
enlightened without literally traipsing into mob hangouts 8 and poorly lit alleyways 9.
Exploring the unknown: Our need to challenge the unknown has driven the human race to cover the globe. This powerful curiosity makes us wonder about everything
that baffles 10 us, including the world's worst fiends. Knowledge is power, or at least feels like it. When gritty details repulse us, exploring evil through the filter of fiction
can help us contemplate humanity's worst without turning away or dwelling almost voyeuristically on real human tragedy. Even when the fiction is about improbable
people doing impossible things, the story's fantastic nature reassures us that this cannot happen – and therefore we don't have to turn away.
In the end, our interest in supervillains can be healthy or unhealthy. Even the more maladaptive reasons for such fascination tend to arise from motivations that were
originally healthy and natural – frustrated drives that went the wrong way.
Remember, though, that superheroic fiction ultimately begins and ends with the heroes. Comic book writers and artists create supervillains, who move in and out as guest
stars and supporting cast, first and foremost to reveal how heroic the comics' stars can be.
Glossary:
1. fiend – an evil and cruel person
2. to dwell – remain
3. foe – an enemy
4. to wield – influence, use power
5. menace – threat
6. to nab thugs – arrest criminals
7. odds – probability
8. to traipse into mob hangouts – walk among places where gangs, criminals meet
9. poorly lit alleyways – narrow road or path with little light
10. to baffle – confuse somebody completely
In the paragraph “Better villain equals better hero”, the author DOESN’T
a) make a comparison.
d) describe characters.
www.tecconcursos.com.br/questoes/1264697
TEXT
Why are we fascinated by supervillains? Posing the question is much like asking why evil itself intrigues us, but there's much more to our continued interest in
supervillains than meets the eye.
Not only do Lex Luthor, Dracula and the Red Skull run unconstrained by conventional morality, they exist outside the limits of reality itself. Their evil, even at its most
realistic, retains a touch of the unreal.
But is our fascination with fantastic fiends 1 healthy? From a psychological perspective, views vary on what drives our enduring interest in superhuman bad guys.
Shadow confrontation: Psychiatrist Carl Jung believed we need to confront and understand our own hidden nature to grow as human beings. Healthy confrontation with
our shadow selves can unearth new strengths (e.g., Bruce Wayne creating his Dark Knight persona to fight crime), whereas unhealthy attempts at confrontation may
involve dwelling on or unleashing the worst parts of ourselves.
Wish fulfillment: Sigmund Freud viewed human nature as inherently antisocial, biologically driven by the undisciplined id's pleasure principle to get what we want when
we want it – born to be bad but held back by society. Even if the psyche fully develops its ego (source of self-control) and superego (conscience), Freudians say the id still
dwells 2 underneath, and it wishes for many selfish things – so it would love to be supervillainous.
Hierarchy of needs: Humanistic psychologist Abraham Maslow held that people who haven't met their most basic needs will have difficulty maturing. If starved for food,
you're unlikely to feel secure. If starved for love and companionship, you'll have trouble building self-esteem. People who dwell on their deficits may envy and resent
others who have more than they do. Some people who are unable to overcome social shortcomings fantasize about obtaining any means, good or bad, to satisfy every
need and greed.
Conditioning: Ivan Pavlov would say we can learn to associate supervillains with other things we value – like entertainment, strength, freedom or the heroes themselves.
Behaviorist B.F. Skinner would likely argue that we can find it reinforcing to watch or read about supervillains, but without knowing what's reinforcing about them, that's a
bit like saying it's rewarding because it's rewarding.
Throughout history, humans have been captivated by stories of heroes facing off against superhuman foes 3. But what specific rewards, needs, wishes and dark dreams
do supervillains satisfy?
Freedom: Superpowered characters enjoy freedoms the rest of us don't. Nobody can arrest Superman unless he lets them (at least not without kryptonite handcuffs). As
much time as supervillains spend locked up, they seem to escape as often as they please, to run unconstrained by rules and regulations. Cosplayers who dress
like Wonder Woman and Captain America can't do any crazy thing that crosses their minds without seeming to mock and insult our heroes, whereas those dressed as
villains get to go wild. Supervillainy feels liberating.
Power: Maybe you envy the power these evil characters wield 4. While that's also a reason to adore superheroes, good guys don't ache to dominate. Stories like
Watchmen and Kingdom Come show how heroes become menaces 5 when they try to take over.
So when dreaming of superpowers, maybe you relate to characters who dream of power as well, from the Scarecrow (who controls individuals' fears) to Doctor Doom
(who's perpetually out to dominate the world).
Better villain than victim: Physiologically, anger activates us and feels better than anxiety or fear. One who feels victimized and cannot figure out constructive ways to
stand up, be strong or become heroic might twist the need for self-assertion into destruction. Alternately, a healthy person simply might focus on how all characters assert
themselves in any given story.
Better villain equals better hero: A hero only appears as heroic as the challenge he or she must overcome. Great heroes require great villains. Without supercriminals, the
world's finest heroes seem like overpowered brutes nabbing thugs 6 unworthy of them. Through myths, legends and lore across time, we have needed heroes who rise to
the occasion, overcome great odds 7 and take down giants.
Facing our fears: Instead of dreading the darkness, you might reduce that dread by shining a light and seeing what's out there. Fiction can help us feel empowered and
enlightened without literally traipsing into mob hangouts 8 and poorly lit alleyways 9.
Exploring the unknown: Our need to challenge the unknown has driven the human race to cover the globe. This powerful curiosity makes us wonder about everything
that baffles 10 us, including the world's worst fiends. Knowledge is power, or at least feels like it. When gritty details repulse us, exploring evil through the filter of fiction
can help us contemplate humanity's worst without turning away or dwelling almost voyeuristically on real human tragedy. Even when the fiction is about improbable
people doing impossible things, the story's fantastic nature reassures us that this cannot happen – and therefore we don't have to turn away.
In the end, our interest in supervillains can be healthy or unhealthy. Even the more maladaptive reasons for such fascination tend to arise from motivations that were
originally healthy and natural – frustrated drives that went the wrong way.
Remember, though, that superheroic fiction ultimately begins and ends with the heroes. Comic book writers and artists create supervillains, who move in and out as guest
stars and supporting cast, first and foremost to reveal how heroic the comics' stars can be.
Glossary:
1. fiend – an evil and cruel person
2. to dwell – remain
3. foe – an enemy
4. to wield – influence, use power
5. menace – threat
6. to nab thugs – arrest criminals
7. odds – probability
8. to traipse into mob hangouts – walk among places where gangs, criminals meet
9. poorly lit alleyways – narrow road or path with little light
10. to baffle – confuse somebody completely
Read the statements below and mark the option that contains the correct ones according to the text.
www.tecconcursos.com.br/questoes/1264698
TEXT
Why are we fascinated by supervillains? Posing the question is much like asking why evil itself intrigues us, but there's much more to our continued interest in
supervillains than meets the eye.
Not only do Lex Luthor, Dracula and the Red Skull run unconstrained by conventional morality, they exist outside the limits of reality itself. Their evil, even at its most
realistic, retains a touch of the unreal.
But is our fascination with fantastic fiends 1 healthy? From a psychological perspective, views vary on what drives our enduring interest in superhuman bad guys.
Shadow confrontation: Psychiatrist Carl Jung believed we need to confront and understand our own hidden nature to grow as human beings. Healthy confrontation with
our shadow selves can unearth new strengths (e.g., Bruce Wayne creating his Dark Knight persona to fight crime), whereas unhealthy attempts at confrontation may
involve dwelling on or unleashing the worst parts of ourselves.
Wish fulfillment: Sigmund Freud viewed human nature as inherently antisocial, biologically driven by the undisciplined id's pleasure principle to get what we want when
we want it – born to be bad but held back by society. Even if the psyche fully develops its ego (source of self-control) and superego (conscience), Freudians say the id still
dwells 2 underneath, and it wishes for many selfish things – so it would love to be supervillainous.
Hierarchy of needs: Humanistic psychologist Abraham Maslow held that people who haven't met their most basic needs will have difficulty maturing. If starved for food,
you're unlikely to feel secure. If starved for love and companionship, you'll have trouble building self-esteem. People who dwell on their deficits may envy and resent
others who have more than they do. Some people who are unable to overcome social shortcomings fantasize about obtaining any means, good or bad, to satisfy every
need and greed.
Conditioning: Ivan Pavlov would say we can learn to associate supervillains with other things we value – like entertainment, strength, freedom or the heroes themselves.
Behaviorist B.F. Skinner would likely argue that we can find it reinforcing to watch or read about supervillains, but without knowing what's reinforcing about them, that's a
bit like saying it's rewarding because it's rewarding.
Throughout history, humans have been captivated by stories of heroes facing off against superhuman foes 3. But what specific rewards, needs, wishes and dark dreams
do supervillains satisfy?
Freedom: Superpowered characters enjoy freedoms the rest of us don't. Nobody can arrest Superman unless he lets them (at least not without kryptonite handcuffs). As
much time as supervillains spend locked up, they seem to escape as often as they please, to run unconstrained by rules and regulations. Cosplayers who dress
like Wonder Woman and Captain America can't do any crazy thing that crosses their minds without seeming to mock and insult our heroes, whereas those dressed as
villains get to go wild. Supervillainy feels liberating.
Power: Maybe you envy the power these evil characters wield 4. While that's also a reason to adore superheroes, good guys don't ache to dominate. Stories like
Watchmen and Kingdom Come show how heroes become menaces 5 when they try to take over.
So when dreaming of superpowers, maybe you relate to characters who dream of power as well, from the Scarecrow (who controls individuals' fears) to Doctor Doom
(who's perpetually out to dominate the world).
Better villain than victim: Physiologically, anger activates us and feels better than anxiety or fear. One who feels victimized and cannot figure out constructive ways to
stand up, be strong or become heroic might twist the need for self-assertion into destruction. Alternately, a healthy person simply might focus on how all characters assert
themselves in any given story.
Better villain equals better hero: A hero only appears as heroic as the challenge he or she must overcome. Great heroes require great villains. Without supercriminals, the
world's finest heroes seem like overpowered brutes nabbing thugs 6 unworthy of them. Through myths, legends and lore across time, we have needed heroes who rise to
the occasion, overcome great odds 7 and take down giants.
Facing our fears: Instead of dreading the darkness, you might reduce that dread by shining a light and seeing what's out there. Fiction can help us feel empowered and
enlightened without literally traipsing into mob hangouts 8 and poorly lit alleyways 9.
Exploring the unknown: Our need to challenge the unknown has driven the human race to cover the globe. This powerful curiosity makes us wonder about everything
that baffles 10 us, including the world's worst fiends. Knowledge is power, or at least feels like it. When gritty details repulse us, exploring evil through the filter of fiction
can help us contemplate humanity's worst without turning away or dwelling almost voyeuristically on real human tragedy. Even when the fiction is about improbable
people doing impossible things, the story's fantastic nature reassures us that this cannot happen – and therefore we don't have to turn away.
In the end, our interest in supervillains can be healthy or unhealthy. Even the more maladaptive reasons for such fascination tend to arise from motivations that were
originally healthy and natural – frustrated drives that went the wrong way.
Remember, though, that superheroic fiction ultimately begins and ends with the heroes. Comic book writers and artists create supervillains, who move in and out as guest
stars and supporting cast, first and foremost to reveal how heroic the comics' stars can be.
Glossary:
1. fiend – an evil and cruel person
2. to dwell – remain
3. foe – an enemy
4. to wield – influence, use power
5. menace – threat
6. to nab thugs – arrest criminals
7. odds – probability
8. to traipse into mob hangouts – walk among places where gangs, criminals meet
9. poorly lit alleyways – narrow road or path with little light
10. to baffle – confuse somebody completely
One of the statements below LACKS the content of the text. Mark it.
a) Our curiosity makes us wonder what we’ve already understood.
b) According to the author’s perspective on superheroes, they can be both heroes and fiends.
www.tecconcursos.com.br/questoes/1264699
TEXT
Why are we fascinated by supervillains? Posing the question is much like asking why evil itself intrigues us, but there's much more to our continued interest in
supervillains than meets the eye.
Not only do Lex Luthor, Dracula and the Red Skull run unconstrained by conventional morality, they exist outside the limits of reality itself. Their evil, even at its most
realistic, retains a touch of the unreal.
But is our fascination with fantastic fiends 1 healthy? From a psychological perspective, views vary on what drives our enduring interest in superhuman bad guys.
Shadow confrontation: Psychiatrist Carl Jung believed we need to confront and understand our own hidden nature to grow as human beings. Healthy confrontation with
our shadow selves can unearth new strengths (e.g., Bruce Wayne creating his Dark Knight persona to fight crime), whereas unhealthy attempts at confrontation may
involve dwelling on or unleashing the worst parts of ourselves.
Wish fulfillment: Sigmund Freud viewed human nature as inherently antisocial, biologically driven by the undisciplined id's pleasure principle to get what we want when
we want it – born to be bad but held back by society. Even if the psyche fully develops its ego (source of self-control) and superego (conscience), Freudians say the id still
dwells 2 underneath, and it wishes for many selfish things – so it would love to be supervillainous.
Hierarchy of needs: Humanistic psychologist Abraham Maslow held that people who haven't met their most basic needs will have difficulty maturing. If starved for food,
you're unlikely to feel secure. If starved for love and companionship, you'll have trouble building self-esteem. People who dwell on their deficits may envy and resent
others who have more than they do. Some people who are unable to overcome social shortcomings fantasize about obtaining any means, good or bad, to satisfy every
need and greed.
Conditioning: Ivan Pavlov would say we can learn to associate supervillains with other things we value – like entertainment, strength, freedom or the heroes themselves.
Behaviorist B.F. Skinner would likely argue that we can find it reinforcing to watch or read about supervillains, but without knowing what's reinforcing about them, that's a
bit like saying it's rewarding because it's rewarding.
Throughout history, humans have been captivated by stories of heroes facing off against superhuman foes 3. But what specific rewards, needs, wishes and dark dreams
do supervillains satisfy?
Freedom: Superpowered characters enjoy freedoms the rest of us don't. Nobody can arrest Superman unless he lets them (at least not without kryptonite handcuffs). As
much time as supervillains spend locked up, they seem to escape as often as they please, to run unconstrained by rules and regulations. Cosplayers who dress
like Wonder Woman and Captain America can't do any crazy thing that crosses their minds without seeming to mock and insult our heroes, whereas those dressed as
villains get to go wild. Supervillainy feels liberating.
Power: Maybe you envy the power these evil characters wield 4. While that's also a reason to adore superheroes, good guys don't ache to dominate. Stories like
Watchmen and Kingdom Come show how heroes become menaces 5 when they try to take over.
So when dreaming of superpowers, maybe you relate to characters who dream of power as well, from the Scarecrow (who controls individuals' fears) to Doctor Doom
(who's perpetually out to dominate the world).
Better villain than victim: Physiologically, anger activates us and feels better than anxiety or fear. One who feels victimized and cannot figure out constructive ways to
stand up, be strong or become heroic might twist the need for self-assertion into destruction. Alternately, a healthy person simply might focus on how all characters assert
themselves in any given story.
Better villain equals better hero: A hero only appears as heroic as the challenge he or she must overcome. Great heroes require great villains. Without supercriminals, the
world's finest heroes seem like overpowered brutes nabbing thugs 6 unworthy of them. Through myths, legends and lore across time, we have needed heroes who rise to
the occasion, overcome great odds 7 and take down giants.
Facing our fears: Instead of dreading the darkness, you might reduce that dread by shining a light and seeing what's out there. Fiction can help us feel empowered and
enlightened without literally traipsing into mob hangouts 8 and poorly lit alleyways 9.
Exploring the unknown: Our need to challenge the unknown has driven the human race to cover the globe. This powerful curiosity makes us wonder about everything
that baffles 10 us, including the world's worst fiends. Knowledge is power, or at least feels like it. When gritty details repulse us, exploring evil through the filter of fiction
can help us contemplate humanity's worst without turning away or dwelling almost voyeuristically on real human tragedy. Even when the fiction is about improbable
people doing impossible things, the story's fantastic nature reassures us that this cannot happen – and therefore we don't have to turn away.
In the end, our interest in supervillains can be healthy or unhealthy. Even the more maladaptive reasons for such fascination tend to arise from motivations that were
originally healthy and natural – frustrated drives that went the wrong way.
Remember, though, that superheroic fiction ultimately begins and ends with the heroes. Comic book writers and artists create supervillains, who move in and out as guest
stars and supporting cast, first and foremost to reveal how heroic the comics' stars can be.
Glossary:
1. fiend – an evil and cruel person
2. to dwell – remain
3. foe – an enemy
4. to wield – influence, use power
5. menace – threat
6. to nab thugs – arrest criminals
7. odds – probability
8. to traipse into mob hangouts – walk among places where gangs, criminals meet
9. poorly lit alleyways – narrow road or path with little light
10. to baffle – confuse somebody completely
a) superheroes have been the supporting cast, not the stars anymore.
c) supervillains are created to prove how heroic and powerful the superheroes are.
www.tecconcursos.com.br/questoes/1278116
TEXT
Food shortage is a serious problem facing the world and is prevalent in sub-Saharan Africa. The scarcity of food is caused by economic, environmental and social factors
such as crop failure, overpopulation and poor government policies, which are the main causes of food scarcity in most countries. Environmental factors determine the kind of
crops to be produced in a given place, economic factors determine the buying and production capacity and socio-political factors determine distribution of food to the
masses. Food shortage has far reaching long and short term negative impacts which include starvation, malnutrition, increased mortality and political unrest1. There is
need to collectively address the issue of food insecurity using both emergency and long term measures.
There are a number of social factors causing food shortages. The rate of population increase is higher than increase in food production. The world is consuming more
than it is producing, leading to decline in food stock and storage level and increased food prices due to soaring2 demand. Increased population has led to clearing of
agricultural land for human settlement reducing agricultural production (Kamdor, 2007). Overcrowding of population in a given place results in urbanization of previously
rich agricultural fields. Destruction of forests for human settlement, particularly tropical rain forest has led to climatic changes, such as prolonged droughts and
desertification. Population increase means more pollution as people use more fuel in cars, industry, domestic cooking. The resultant effect is increased air and water
pollution which affect the climate and food production.
Environmental factors have greatly contributed to food shortage. Climatic change has reduced agricultural production. The change in climate is majorly caused by human
activities and to some small extent natural activities. Increased combustion of fossil fuels due to increasing population through power plant, motor transport and mining of
coal and oil emits green house gases which have continued to affect world climate. Deforestation of tropical forest due to human pressure has changed climatic patterns
and rainfall seasons, and led to desertification which cannot support a crop production. Land degradation due to increased human activities has impacted negatively on
agricultural production (Kamdor, 2007). Natural disasters such as floods, tropical storms and prolonged droughts are on the increase and have devastating impacts on
food security particularly in developing countries. There are several economic factors that contribute to food shortage. Economic factors affect the ability of farmers to
engage in agricultural production. Poverty situation in developing nations have reduced their capacity to produce food, as most farmers cannot afford seed and fertilizers.
They use poor farming methods that cannot yield3 enough, even substantial use. Investments in agricultural research and developing are very low in developing nations.
Recent global financial crisis have led to increase in food prices and reduced investments in agriculture by individuals and governments in developed nations resulting in
reduced food production.
There are a number of short term effects of food shortage. The impact on children, mothers and elderly are very evident as seen in malnutrition and hunger related
deaths. Children succumb to hunger within short period as they cannot stand long period of starvation and they die even before the arrival of emergency assistance.
There are also long term effects of food shortage. These include increase in the price of food as a result demand and supply forces. Increasing cost of food production
due to the increase in fuel prices coupled with persistent drought in grain producing regions has contributed to the increase in the price of food in the world. Increase in
oil price led to increase in the price of fertilizers, transportation of food and also industrial agriculture. Increasing food prices culminated in political instability and social
unrest in several nations across the globe in 2007, in countries of Mexico, Cameroon, Brazil, Burkina Faso, Pakistan, Egypt and Bangladesh among other nations (Kamdor,
2007).
There are some solutions to the problem of food shortage. There is a need to reduce carbon emissions and pollution, through concerted and individual efforts, in order to limit
the resultant climatic change. There is also a need to invest in clean energy, such as solar, nuclear and geothermal power, in homes and industries, because these sources
do not have adverse effects on the environment (Kamdor, 2007). Rich nations should help poor nations to develop and use clean and renewable energy in order to stabilize
greenhouse gas emissions into the atmosphere (Watson, n.d.). Governments need to work in consultation with climate bodies, the World Bank and the UN to engage in projects
aimed at promoting a green environment.
Conclusion
The causes of food shortage are well known and can be addressed if appropriate measures are taken and effectively implemented. The environmental causes of
food shortages are changes in climate and pollution due to human activities such as overgrazing4 and deforestation, which can be controlled through legislation.
Glossary:
1. unrest – disagreement or fighting between different
groups of people
2. soaring – something that increases rapidly above the
usual level
3. yield – to supply or produce something such as profit or
an amount or food
4. overgrazing – excessive use of land where animals feed
on grass
The text
a) points out how well Burkina Faso dealt with food shortage.
d) states that land degradation is a natural impact for today’s climatic stability.
www.tecconcursos.com.br/questoes/1278117
b) If one applies the required solutions one solves food shortage problem.
www.tecconcursos.com.br/questoes/1278118
The first paragraph states that crop failure, overpopulation and poor government policies are the main cause of food scarcity in most countries. Such problems may
represent respectively
a) urban, economic and social factors.
www.tecconcursos.com.br/questoes/1278119
www.tecconcursos.com.br/questoes/1655163
TEXT
When making a decision, it is a common impulse to look and see what others are doing. Nevertheless, it is often unclear whether the path that everyone else may be
following is good for us as well. After all, sometimes following the crowd has merit - at other times, it is simply peer pressure blinding us.
The phenomenon of looking to others and following the crowd has been studied by social science for a long time. Nevertheless, those findings do not always make their
way to individual decision-makers. Therefore, let’s review why people conform to the crowd – and under what conditions it is a good idea to go your own way instead.
To start, individuals tend to look to the opinions of others, especially when they are unsure and lack information from other sources. This dynamic was supported by
classic research from Sherif (1937), who explored how a person’s perception of a very ambiguous stimulus can be influenced by the opinion of others. Sherif (1937) asked
participants to watch a small light in a dark and featureless room and evaluate how much that light moved around. In actuality, however, the light never moved at all –
but the way our perception works in that situation gives the possible illusion of movement (called the Autokinetic Effect). In this uncertain and ambiguous perceptual
situation, Sherif (1937) found that individuals were quite susceptible to the influence of the opinions of others when trying to decide how much the light was “moving”.
Unfortunately, this phenomenon also extends to individuals following the crowd, even when they can clearly see that others are wrong. This was first evaluated by Asch
(1955), who asked participants to pick a line from a few choices of varying lengths that matched up with another example line given to them. From a perceptual
standpoint, the task was easy – as the correct choice of which lines were actually similar to one another was clear. Nevertheless, when participants were surrounded by
other individuals giving the wrong answer, they often conformed and made the wrong choice as well. Thus, even when the correct choice is clear, and what others are
doing is wrong, that peer pressure can still cause us to doubt ourselves and follow the crowd.
Why is it that we are so compelled to follow the crowd, even when it is objectively clear that they are wrong? According to more recent research, we may simply be wired
that way. Specifically, these social influences can actually change our perceptions and memories (Edelson, Sharot, Dolan, & Dudai, 2011). Therefore, rather than
knowingly making the wrong choice just to conform to peer pressure, the influence of others may actually change what we see as the correct choice in the moment and
remember as the right thing after the fact. Beyond that, we might just have “herding brains” with built-in components that monitor our social alignments and make us feel
good when we follow the crowd too (Shamay-Tsoory, Saporta, Marton-Alper, & Gvirts, 2019).
Fortunately, this effect has good points as well. In many cases, group decision-making can help individuals look beyond their own private perspectives and make more
rational decisions (Fahr & Irlenbusch, 2011). Furthermore, pro-social and altruistic behaviors can be influenced and shared through such conformity as well (Nook, Ong,
Morelli, Mitchell, & Zaki, 2016). Therefore, sometimes following the crowd helps people get along and make better decisions too.
Given the above, when making a decision, it is important to consider whether following others is a good idea – or is leading you astray instead. Some simple steps can
help you figure it out.
Getting swept away by what everyone else is doing is often an emotional and thoughtless process. We are conforming simply because we have not given sufficient
attention and effort toward considering any other options. Therefore, unless you are in an emergency situation and need to immediately follow everyone else toward the
nearest exit, it might be a good idea to switch to more deliberate thinking processes, rather than just going with your initial reaction.
Some choices and decision-making situations are more individual, while others are more social. Therefore, it is important to consider the specific situation. Is this an
individual choice, or does it involve others? If you have sufficient information to make a clear choice on your own, and you do not need group approval, then you might
want to make up your own mind. If you are personally unsure, or you need the support of others to make something happen, then taking the opinion of others into
consideration might be a good idea instead.
It is generally a good idea to evaluate your choices and decisions from multiple perspectives. The same is true for following the opinion of others too. Although it might
not feel that way at times, especially in the modern day of media coverage and social networking, everyone is not doing it – whatever “it” is that you are considering.
Given that, before you follow the advice or choices of any particular group of people, it might be a good idea to look at what other groups of people are doing or choosing
too. In addition, we can learn a lot from people making choices contrary to ourselves or our preferred group, particularly about potential down-sides to choices we might
not be seeing. Therefore, if you do need to look to others to help provide information regarding a particular choice or decision, then it might help to seek out people with
a few different opinions, weigh your options among them, and figure out what will work best for you.
www.tecconcursos.com.br/questoes/1655167
www.tecconcursos.com.br/questoes/1655171
In the text, the phrasal verb that means have a harmonious and friendly relationship is
a) look to.
b) get along.
c) look beyond.
d) figure out.
www.tecconcursos.com.br/questoes/1655172
d) dislocation
www.tecconcursos.com.br/questoes/1655174
www.tecconcursos.com.br/questoes/1473426
It weighed about 10,000 tons, entered the atmosphere at a speed of 64,000 km/h and exploded over a city with a blast of 500 kilotons. But on 15 February 2013, we
were lucky. The meteorite that showered pieces of rock over Chelyabinsk, Russia, was relatively small, at only about 17 metres wide. Although many people were
injured by falling glass, the damage was nothing compared to what had happened in Siberia nearly one hundred years ago, when a relatively small object
(approximately 50 metres in diameter) exploded in mid-air over a forest region, flattening about 80 million trees. If it had exploded over a city such as Moscow or London,
millions of people would have been killed.
By a strange coincidence, the same day that the meteorite terrified the people of Chelyabinsk, another 50m-wide asteroid passed relatively close to Earth. Scientists were
expecting that visit and know that the asteroid will return to fly close by us in 2046, but the Russian meteorite earlier in the day had been too small for anyone to spot.
Most scientists agree that comets and asteroids pose the biggest natural threat to human existence. It was probably a large asteroid or comet colliding with Earth which
wiped out the dinosaurs about 65 million years ago. An enormous object, 10 to 16 km in diameter, struck the Yucatan region in Mexico with the force of 100 megatons.
That is the equivalent of one Hiroshima bomb for every person alive on Earth today.
Many scientists, including the late Stephen Hawking, say that any comet or asteroid greater than 20km in diameter that hits Earth will result in the complete destruction
of complex life, including all animals and most plants. As we have seen, even a much smaller asteroid can cause great damage.
The Earth has been kept fairly safe for the last 65 million years by good fortune and the massive gravitational field of the planet Jupiter. Our cosmic guardian, with its
stable circular orbit far from the sun, sweeps up and scatters away most of the dangerous comets and asteroids which might cross Earth’s orbit.
After the Chelyabinsk meteorite, scientists are now monitoring potential hazards even more carefully but, as far as they know, there is no danger in the foreseeable
future.
• Comet – a ball of rock and ice that sends out a tail of gas and dust behind it. Bright comets only appear in our visible night sky about once every ten years.
• Asteroid – a rock a few feet to several kms in diameter. Unlike comets, asteroids have no tail. Most are too small to cause any damage and burn up in the
atmosphere.
• Meteoroid – part of an asteroid or comet.
• Meteorite – what a meteoroid is called when it hits Earth.
The passage “the damage was nothing compared to what had happened in Siberia nearly one hundred years ago” states that the incident occurred a century
ago.
a) actually
b) precisely
c) approximately
d) exactly
www.tecconcursos.com.br/questoes/1260476
Cancer is the second leading cause of death in the United States, in Germany and in many other industrialized countries. In 2007, about 12 million people were diagnosed
with cancer worldwide with a mortality rate of 7.6 million (American Cancer Society, 2007). In the industrial countries, the most commonly diagnosed cancers in men are
prostate cancer, lung cancer and colorectal cancer. Women are most commonly diagnosed with breast cancer, gastric cancer and lung cancer.
The symptoms of cancer depend on the type of the disease, but there are common symptoms caused by cancer and/or by its medical treatment (e.g., chemotherapy and
radiation). Common physical symptoms are pain, fatigue, sleep disturbances, loss of appetite, nausea (feeling sick, vomiting), dizziness, limited physical activity, hair loss,
a sore mouth/throat and bowel problems. Cancer also often causes psychological problems such as depression, anxiety, mood disturbances, stress, insecurity, grief and
decreased self-esteem. This, in turn, can have social consequences. Social isolation can occur due to physical or psychological symptoms (for example, feeling too
tired to meet friends, cutting oneself off due to depressive complaints).
Besides conventional pharmacological treatments of cancer, there are treatments to meet psychological and physical needs of the patient. Psychological consequences of
cancer, such as depression, anxiety or loss of control, can be counteracted by psychotherapy. For example, within cognitive therapy cancer patients may develop coping
strategies to handle the disease. Research indicates that music therapy, which is a form of psychotherapy, can have positive effects on both physiological and
psychological symptoms of cancer patients as well as in acute or palliative situations.
There are several definitions of music therapy. According to the World Federation of Music Therapy (WFMT, 1996), music therapy is: “the use of music and/or its music
elements (sound, rhythm, melody and harmony) by a qualified music therapist, with a client or group, in a process designed to facilitate and promote communication,
relationship, learning, mobilization, expression, organization, and other relevant therapeutic objectives, in order to meet physical, emotional, mental, social and cognitive
needs”.
The Dutch Music Therapy Association (NVCT, 1999) defines music therapy as “a methodological form of assistance in which musical means are used within a therapeutic
relation to manage changes, developments, stabilisation or acceptance on the emotional, behavioural, cognitive, social or on the physical field”.
The assumption is that the patient's musical behaviour conforms to their general behaviour. The starting points are the features of the patient's specific disorder or
disease pattern. There is an analogy between psychological problems and musical behaviour, which means that emotions can be expressed musically. For patients who
have difficulties in expressing emotions, music therapy can be a useful medium. Music therapy might be a useful intervention for breast cancer patients in order to
facilitate and enhance their emotional expressivity. Besides analogy, there are further qualities of music that can be beneficial within therapeutic treatment. One of
these qualities is symbolism: music can symbolize persons, objects, incidents, experiences or memories of daily life. Therefore, music is a reality, which represents another
reality. The symbolism of the musical reality enables the patient to deal safely with the other reality for it evokes memories about persons, objects or incidents. These
associations can be perceived as positive or negative, so they release emotions in the patient.
Music therapy addresses both the physical and psychological needs of the patient. Numerous studies indicate that music therapy can be beneficial to both acute cancer
patients and palliative cancer patients in the final stage of disease.
Most research with acute cancer patients receiving chemotherapy, surgery or stem cell transplantation examined the effectiveness of receptive music therapy. Listening to
music during chemotherapy, either played live by the music therapist or from tape has a positive effect on pain perception, relaxation, anxiety and mood. There was also
found a decrease in diastolic blood pressure or heart rate and an improvement in fatigue; insomnia and appetite loss could be significantly decreased in patients older
than 45 years. Further improvements by receptive music therapy were found for physical comfort, vitality, dizziness and tolerability of the chemotherapy. A study with
patients undergoing surgery found that receptive music therapy led to decreased anxiety, stress and relaxation levels before, during and after surgery. Music therapy can
also be applied in palliative situations, for example to patients with terminal cancer who live in hospices.
Studies indicate that music therapy may be beneficial for cancer patients in acute and palliative situations, but the benefits of music therapy for convalescing cancer
patients remain unclear. Whereas music therapy interventions for acute and palliative patients often focus on physiological and psychosomatic symptoms, such as pain
perception and reducing medical side-effects, music therapy with posthospital curative treatment could have its main focus on psychological aspects. A cancer patient is
not free from cancer until five years after the tumour ablation. The patient fears that the cancer has not been defeated. In this stage of the disease, patients frequently
feel insecure, depressive and are emotionally unstable. How to handle irksome and negative emotions is an important issue for many oncology patients. After the difficult
period of the medical treatment, which they often have overcome in a prosaic way by masking emotions, patients often express the wish to become aware of themselves
again. They may wish to grapple with negative emotions due to their disease. Other patients wish to experience positive feelings, such as enjoyment and vitality.
The results indicate that music therapy can also have positive influences on well-being of cancer patients in the post-hospital curative stage as well as they offer valuable
information about patients' needs in this state of treatment and how effects can be dealt with properly.
b) defining.
c) managing.
d) creating.
www.tecconcursos.com.br/questoes/1260479
Cancer is the second leading cause of death in the United States, in Germany and in many other industrialized countries. In 2007, about 12 million people were diagnosed
with cancer worldwide with a mortality rate of 7.6 million (American Cancer Society, 2007). In the industrial countries, the most commonly diagnosed cancers in men are
prostate cancer, lung cancer and colorectal cancer. Women are most commonly diagnosed with breast cancer, gastric cancer and lung cancer.
The symptoms of cancer depend on the type of the disease, but there are common symptoms caused by cancer and/or by its medical treatment (e.g., chemotherapy and
radiation). Common physical symptoms are pain, fatigue, sleep disturbances, loss of appetite, nausea (feeling sick, vomiting), dizziness, limited physical activity, hair loss,
a sore mouth/throat and bowel problems. Cancer also often causes psychological problems such as depression, anxiety, mood disturbances, stress, insecurity, grief and
decreased self-esteem. This, in turn, can have social consequences. Social isolation can occur due to physical or psychological symptoms (for example, feeling too
tired to meet friends, cutting oneself off due to depressive complaints).
Besides conventional pharmacological treatments of cancer, there are treatments to meet psychological and physical needs of the patient. Psychological consequences of
cancer, such as depression, anxiety or loss of control, can be counteracted by psychotherapy. For example, within cognitive therapy cancer patients may develop coping
strategies to handle the disease. Research indicates that music therapy, which is a form of psychotherapy, can have positive effects on both physiological and psychological symptoms of cancer patients, in acute as well as palliative situations.
There are several definitions of music therapy. According to the World Federation of Music Therapy (WFMT, 1996), music therapy is: “the use of music and/or its music
elements (sound, rhythm, melody and harmony) by a qualified music therapist, with a client or group, in a process designed to facilitate and promote communication,
relationship, learning, mobilization, expression, organization, and other relevant therapeutic objectives, in order to meet physical, emotional, mental, social and cognitive
needs”.
The Dutch Music Therapy Association (NVCT, 1999) defines music therapy as “a methodological form of assistance in which musical means are used within a therapeutic
relation to manage changes, developments, stabilisation or acceptance on the emotional, behavioural, cognitive, social or on the physical field”.
The assumption is that the patient's musical behaviour conforms to their general behaviour. The starting points are the features of the patient's specific disorder or
disease pattern. There is an analogy between psychological problems and musical behaviour, which means that emotions can be expressed musically. For patients who
have difficulties in expressing emotions, music therapy can be a useful medium. Music therapy might be a useful intervention for breast cancer patients in order to
facilitate and enhance their emotional expressivity. Besides analogy, there are further qualities of music that can be beneficial within therapeutic treatment. One of
these qualities is symbolism: music can symbolize persons, objects, incidents, experiences or memories of daily life. Therefore, music is a reality, which represents another
reality. The symbolism of the musical reality enables the patient to deal safely with the other reality for it evokes memories about persons, objects or incidents. These
associations can be perceived as positive or negative, so they release emotions in the patient.
Music therapy addresses both the physical and psychological needs of the patient. Numerous studies indicate that music therapy can be beneficial to both acute cancer
patients and palliative cancer patients in the final stage of disease.
Most research with acute cancer patients receiving chemotherapy, surgery or stem cell transplantation examined the effectiveness of receptive music therapy. Listening to
music during chemotherapy, either played live by the music therapist or from tape, has a positive effect on pain perception, relaxation, anxiety and mood. A decrease in diastolic blood pressure or heart rate and an improvement in fatigue were also found; insomnia and appetite loss could be significantly decreased in patients older
than 45 years. Further improvements by receptive music therapy were found for physical comfort, vitality, dizziness and tolerability of the chemotherapy. A study with
patients undergoing surgery found that receptive music therapy led to decreased anxiety and stress and increased relaxation before, during and after surgery. Music therapy can
also be applied in palliative situations, for example to patients with terminal cancer who live in hospices.
Studies indicate that music therapy may be beneficial for cancer patients in acute and palliative situations, but the benefits of music therapy for convalescing cancer patients remain unclear. Whereas music therapy interventions for acute and palliative patients often focus on physiological and psychosomatic symptoms, such as pain perception and the reduction of medical side-effects, music therapy in post-hospital curative treatment could have its main focus on psychological aspects. A cancer patient is not considered free from cancer until five years after the tumour ablation. The patient fears that the cancer has not been defeated. In this stage of the disease, patients frequently feel insecure and depressed and are emotionally unstable. How to handle irksome and negative emotions is an important issue for many oncology patients. After the difficult period of the medical treatment, which they have often overcome in a prosaic way by masking emotions, patients often express the wish to become aware of themselves again. They may wish to grapple with negative emotions due to their disease. Other patients wish to experience positive feelings, such as enjoyment and vitality.
The results indicate that music therapy can also have a positive influence on the well-being of cancer patients in the post-hospital curative stage, and they offer valuable information about patients' needs at this stage of treatment and how effects can be dealt with properly.
Mark the alternative that LACKS the correct synonym for the underlined word.
b) Besides analogy, there are further qualities of music that can be beneficial – apart from.
www.tecconcursos.com.br/questoes/1260484
According to the text, mark the option which contains the meaning for the word “hospice”.
www.tecconcursos.com.br/questoes/1264692
TEXT
Why are we fascinated by supervillains? Posing the question is much like asking why evil itself intrigues us, but there's much more to our continued interest in
supervillains than meets the eye.
Not only do Lex Luthor, Dracula and the Red Skull run unconstrained by conventional morality, they exist outside the limits of reality itself. Their evil, even at its most
realistic, retains a touch of the unreal.
But is our fascination with fantastic fiends1 healthy? From a psychological perspective, views vary on what drives our enduring interest in superhuman bad guys.
Shadow confrontation: Psychiatrist Carl Jung believed we need to confront and understand our own hidden nature to grow as human beings. Healthy confrontation with
our shadow selves can unearth new strengths (e.g., Bruce Wayne creating his Dark Knight persona to fight crime), whereas unhealthy attempts at confrontation may
involve dwelling on or unleashing the worst parts of ourselves.
Wish fulfillment: Sigmund Freud viewed human nature as inherently antisocial, biologically driven by the undisciplined id's pleasure principle to get what we want when
we want it – born to be bad but held back by society. Even if the psyche fully develops its ego (source of self-control) and superego (conscience), Freudians say the id still dwells2 underneath, and it wishes for many selfish things – so it would love to be supervillainous.
Hierarchy of needs: Humanistic psychologist Abraham Maslow held that people who haven't met their most basic needs will have difficulty maturing. If starved for food,
you're unlikely to feel secure. If starved for love and companionship, you'll have trouble building self-esteem. People who dwell on their deficits may envy and resent
others who have more than they do. Some people who are unable to overcome social shortcomings fantasize about obtaining any means, good or bad, to satisfy every
need and greed.
Conditioning: Ivan Pavlov would say we can learn to associate supervillains with other things we value – like entertainment, strength, freedom or the heroes themselves.
Behaviorist B.F. Skinner would likely argue that we can find it reinforcing to watch or read about supervillains, but without knowing what's reinforcing about them, that's a
bit like saying it's rewarding because it's rewarding.
Throughout history, humans have been captivated by stories of heroes facing off against superhuman foes3. But what specific rewards, needs, wishes and dark dreams
do supervillains satisfy?
Freedom: Superpowered characters enjoy freedoms the rest of us don't. Nobody can arrest Superman unless he lets them (at least not without kryptonite handcuffs). As
much time as supervillains spend locked up, they seem to escape as often as they please, to run unconstrained by rules and regulations. Cosplayers who dress like
Wonder Woman and Captain America can't do any crazy thing that crosses their minds without seeming to mock and insult our heroes, whereas those dressed as villains
get to go wild. Supervillainy feels liberating.
Power: Maybe you envy the power these evil characters wield4. While that's also a reason to adore superheroes, good guys don't ache to dominate. Stories like
Watchmen and Kingdom Come show how heroes become menaces5 when they try to take over.
So when dreaming of superpowers, maybe you relate to characters who dream of power as well, from the Scarecrow (who controls individuals' fears) to Doctor Doom
(who's perpetually out to dominate the world).
Better villain than victim: Physiologically, anger activates us and feels better than anxiety or fear. One who feels victimized and cannot figure out constructive ways to
stand up, be strong or become heroic might twist the need for self-assertion into destruction. Alternately, a healthy person simply might focus on how all characters assert
themselves in any given story.
Better villain equals better hero: A hero only appears as heroic as the challenge he or she must overcome. Great heroes require great villains. Without supercriminals, the world's finest heroes seem like overpowered brutes nabbing thugs6 unworthy of them. Through myths, legends and lore across time, we have needed heroes who rise to the occasion, overcome great odds7 and take down giants.
Facing our fears: Instead of dreading the darkness, you might reduce that dread by shining a light and seeing what's out there. Fiction can help us feel empowered and enlightened without literally traipsing into mob hangouts8 and poorly lit alleyways9.
Exploring the unknown: Our need to challenge the unknown has driven the human race to cover the globe. This powerful curiosity makes us wonder about everything that baffles10 us, including the world's worst fiends. Knowledge is power, or at least feels like it. When gritty details repulse us, exploring evil through the filter of fiction
can help us contemplate humanity's worst without turning away or dwelling almost voyeuristically on real human tragedy. Even when the fiction is about improbable
people doing impossible things, the story's fantastic nature reassures us that this cannot happen – and therefore we don't have to turn away.
In the end, our interest in supervillains can be healthy or unhealthy. Even the more maladaptive reasons for such fascination tend to arise from motivations that were
originally healthy and natural – frustrated drives that went the wrong way.
Remember, though, that superheroic fiction ultimately begins and ends with the heroes. Comic book writers and artists create supervillains, who move in and out as guest
stars and supporting cast, first and foremost to reveal how heroic the comics' stars can be.
Glossary:
1. fiend – an evil and cruel person
2. to dwell – remain
3. foe – an enemy
4. to wield – influence, use power
5. menace – threat
6. to nab thugs – arrest criminals
7. odds – probability
8. to traipse into mob hangouts – walk among places where gangs, criminals meet
9. poorly lit alleyways – narrow road or path with little light
10. to baffle – confuse somebody completely
a) We need to obey rules and regulations that the superpowered characters follow too.
b) We look for supervillainy in order to feel that we can behave the way we want.
www.tecconcursos.com.br/questoes/1264696
In the sentence “when gritty details repulse us [...]”, the underlined word means
a) harmful.
b) unknown.
d) impossible.
www.tecconcursos.com.br/questoes/1278121
TEXT
Food shortage is a serious problem facing the world and is prevalent in sub-Saharan Africa. The scarcity of food is caused by economic, environmental and social factors; crop failure, overpopulation and poor government policies are the main causes of food scarcity in most countries. Environmental factors determine the kind of crops to be produced in a given place, economic factors determine the buying and production capacity, and socio-political factors determine the distribution of food to the masses. Food shortage has far-reaching long- and short-term negative impacts which include starvation, malnutrition, increased mortality and political unrest1. There is a need to collectively address the issue of food insecurity using both emergency and long-term measures.
There are a number of social factors causing food shortages. The rate of population increase is higher than the increase in food production. The world is consuming more than it is producing, leading to a decline in food stock and storage levels and increased food prices due to soaring2 demand. Increased population has led to the clearing of agricultural land for human settlement, reducing agricultural production (Kamdor, 2007). Overcrowding of population in a given place results in the urbanization of previously rich agricultural fields. Destruction of forests for human settlement, particularly tropical rain forest, has led to climatic changes such as prolonged droughts and desertification. Population increase means more pollution as people use more fuel in cars, industry and domestic cooking. The resultant effect is increased air and water pollution, which affects the climate and food production.
Environmental factors have greatly contributed to food shortage. Climatic change has reduced agricultural production. The change in climate is majorly caused by human activities and to some small extent natural activities. Increased combustion of fossil fuels by an increasing population, through power plants, motor transport and the mining of coal and oil, emits greenhouse gases which have continued to affect the world climate. Deforestation of tropical forest due to human pressure has changed climatic patterns and rainfall seasons, and led to desertification which cannot support crop production. Land degradation due to increased human activities has impacted negatively on agricultural production (Kamdor, 2007). Natural disasters such as floods, tropical storms and prolonged droughts are on the increase and have devastating impacts on food security, particularly in developing countries. There are several economic factors that contribute to food shortage. Economic factors affect the ability of farmers to engage in agricultural production. Poverty situation in developing nations have reduced their capacity to produce food, as most farmers cannot afford seed and fertilizers. They use poor farming methods that cannot yield3 enough, even for substantial use. Investments in agricultural research and development are very low in developing nations. The recent global financial crisis has led to increases in food prices and reduced investments in agriculture by individuals and governments in developed nations, resulting in reduced food production.
There are a number of short-term effects of food shortage. The impact on children, mothers and the elderly is very evident, as seen in malnutrition and hunger-related deaths. Children succumb to hunger within a short period as they cannot stand long periods of starvation, and they die even before the arrival of emergency assistance. There are also long-term effects of food shortage. These include an increase in the price of food as a result of demand and supply forces. The increasing cost of food production, due to the increase in fuel prices coupled with persistent drought in grain-producing regions, has contributed to the increase in the price of food in the world. The increase in oil prices led to an increase in the price of fertilizers, the transportation of food and also industrial agriculture. Increasing food prices culminated in political instability and social unrest in several nations across the globe in 2007, in countries such as Mexico, Cameroon, Brazil, Burkina Faso, Pakistan, Egypt and Bangladesh, among other nations (Kamdor, 2007).
There are some solutions to the problem of food shortage. There is a need to reduce carbon emissions and pollution in order to limit the resultant climatic change through concerted and individual efforts. There is a need to invest in clean energy such as solar, nuclear and geothermal power in homes and industries, because they don’t have adverse effects on the environment (Kamdor, 2007). Rich nations should help poor nations to develop and use clean and renewable energy in order to stabilize greenhouse gas emissions into the atmosphere (Watson, n.d.). Governments need to work in consultation with climatic bodies, the World Bank and the UN to engage in projects aimed at promoting a green environment.
Conclusion
Causes of food shortage are well known and can be addressed if appropriate measures to solve the problem are taken and effectively implemented. Environmental causes of food shortages are changes in climate and pollution due to human activities such as overgrazing4 and deforestation, which can be controlled through legislation.
Glossary:
1. unrest – disagreement or fighting between different groups of people
2. soaring – something that increases rapidly above the usual level
3. yield – to supply or produce something such as profit or an amount of food
4. overgrazing – excessive use of land where animals feed on grass
www.tecconcursos.com.br/questoes/1278122
The sentence “the change in climate is majorly caused by human activities” means that
www.tecconcursos.com.br/questoes/1278124
In the sentence “the change in climate is majorly caused by human activities”, the highlighted word means
a) on the average.
b) basically.
c) unlikely.
d) up to a great extent.
www.tecconcursos.com.br/questoes/1278125
Mark the option which best shows the meaning of the highlighted expression in “deforestation of tropical forest due to human pressure”.
a) Owed by.
b) Arranged for.
c) Caused by.
d) Deserved by.
www.tecconcursos.com.br/questoes/1278129
In the sentence “land degradation due to increased human activities has impacted negatively on agricultural production”, it is INCORRECT to state that
a) the adverb ‘negatively’ suggests the idea of something with unsatisfactory results.
b) no change of meaning happens if the expression ‘due to’ is replaced by ‘because of’.
c) the time tense of the sentence refers to a past situation which has no relation with the present moment.
d) ‘land degradation’ can be defined as the result of several actions that worsened the quality of the soil.
www.tecconcursos.com.br/questoes/1278131
TEXT
Food shortage is a serious problem facing the world and is prevalent in sub-Saharan Africa. The scarcity of food is caused by economic, environmental and social factors
such as crop failure, overpopulation and poor government policies are the main cause of food scarcity in most countries. Environmental factors determine the kind of
crops to be produced in a given place, economic factors determine the buying and production capacity and socio-political factors determine distribution of food to the
masses. Food shortage has far reaching long and short term negative impacts which include starvation, malnutrition, increased mortality and political unrest1. There is
need to collectively address the issue of food insecurity using both emergency and long term measures.
There are a number of social factors causing food shortages. The rate of population increase is higher than increase in food production. The world is consuming more
than it is producing, leading to decline in food stock and storage level and increased food prices due to soaring2 demand. Increased population has led to clearing of
agricultural land for human settlement reducing agricultural production (Kamdor, 2007). Overcrowding of population in a given place results in urbanization of previously
rich agricultural fields. Destruction of forests for human settlement, particularly tropical rain forest has led to climatic changes, such as prolonged droughts and
desertification. Population increase means more pollution as people use more fuel in cars, industry, domestic cooking. The resultant effect is increased air and water
pollution which affect the climate and food production.
Environmental factors have greatly contributed to food shortage. Climatic change has reduced agricultural production. The change in climate is majorly caused by human
activities and to some small extent natural activities. Increased combustion of fossil fuels due to increasing population through power plant, motor transport and mining of
coal and oil emits green house gases which have continued to affect world climate. Deforestation of tropical forest due to human pressure has changed climatic patterns
and rainfall seasons, and led to desertification which cannot support a crop production. Land degradation due to increased human activities has impacted negatively on
agricultural production (Kamdor, 2007). Natural disasters such as floods, tropical storms and prolonged droughts are on the increase and have devastating impacts on
food security particularly in developing countries. There are several economic factors that contribute to food shortage. Economic factors affect the ability of farmers to
engage in agricultural production. Poverty situation in developing nations have reduced their capacity to produce food, as most farmers cannot afford seed and fertilizers.
They use poor farming methods that cannot yield3 enough, even substantial use. Investments in agricultural research and developing are very low in developing nations.
Recent global financial crisis have led to increase in food prices and reduced investments in agriculture by individuals and governments in developed nations resulting in
reduced food production.
There are a number of short term effects of food shortage. The impact on children, mothers and elderly are very evident as seen in malnutrition and hunger related
deaths. Children succumb to hunger within short period as they cannot stand long period of starvation and they die even before the arrival of emergency assistance.
There are also long term effects of food shortage. These include increase in the price of food as a result demand and supply forces. Increasing cost of food production
due to the increase in fuel prices coupled with persistent drought in grain producing regions has contributed to the increase in the price of food in the world. Increase in
oil price led to increase in the price of fertilizers, transportation of food and also industrial agriculture. Increasing food prices culminated in political instability and social
unrest in several nations across the globe in 2007, in countries of Mexico, Cameroon, Brazil, Burkina Faso, Pakistan, Egypt and Bangladesh among other nations (Kamdor,
2007).
There are some solutions to the problem of food shortage. There is need to reduce production of carbon emissions and pollution to reduce the resultant climatic change
through concerted and individual efforts. There is need to invest in clean energy such as solar, nuclear, and geothermal power in homes and industries, because they
don’t have adverse effects on the environment (Kamdor, 2007). Rich nations should help poor nations to develop and use clean and renewable energy in order to stabilize
green house emissions into the atmosphere (Watson, nd). Government need to work in consultation with climatic bodies, World Bank and the UN to engage in projects
aimed at promoting green environment.
Conclusion
Causes of food shortage are well known and can be solved if appropriate measures to solve the problem are taken and effectively implemented. Environmental causes of food shortages are changes in climatic and pollution due to human activities such overgrazing4 and deforestation which can be controlled through legislation.
Glossary:
1. unrest – disagreement or fighting between different groups of people
2. soaring – something that increases rapidly above the usual level
3. yield – to supply or produce something such as profit or an amount of food
4. overgrazing – excessive use of land where animals feed on grass
In “poverty situation in developing nations have reduced their capacity to produce food, as most farmers cannot afford seed and fertilizers”, the underlined word means
a) poverty situation.
b) developing nations.
d) most farmers.
www.tecconcursos.com.br/questoes/1278134
The sentence “recent global financial crisis have led to increase in food prices and reduced investments in agriculture” states that
a) investments in agriculture have increased as much as food prices after the recent global crisis.
b) food prices are getting lower than investments in agriculture because of the recent global financial crisis.
c) investments in agriculture are getting more and more common since food prices are getting higher.
d) food prices are getting higher and investments in agriculture are getting lower due to the recent global financial crisis.
www.tecconcursos.com.br/questoes/1278136
Starvation, malnutrition, increased mortality and political unrest are mentioned in the text as examples of
a) food shortage negative impacts.
www.tecconcursos.com.br/questoes/1278137
d) climatic changes.
www.tecconcursos.com.br/questoes/1278140
www.tecconcursos.com.br/questoes/1278142
c) geothermal power.
www.tecconcursos.com.br/questoes/1278143
www.tecconcursos.com.br/questoes/1275751
TEXT
Howard Gardner: ‘Multiple intelligences’ are not ‘learning styles’ by Valerie Strauss
The fields of psychology and education were revolutionized 30 years ago when the now world-renowned psychologist Howard Gardner published his 1983 book Frames of
Mind: The Theory of Multiple Intelligences, which detailed a new model of human intelligence that went beyond the traditional view that there was a single kind that
could be measured by standardized tests.
Gardner’s theory initially listed seven intelligences which work together: linguistic, logical-mathematical, musical, bodily-kinesthetic, interpersonal and intrapersonal; he
later added an eighth, naturalist intelligence and says there may be a few more. The theory became highly popular with K-12 educators1 around the world seeking ways
to reach students who did not respond to traditional approaches, but over time, ‘multiple intelligences’ somehow became synonymous with the concept of ‘learning styles’.
In this important post, Gardner explains why the former is not the latter.
It’s been 30 years since I developed the notion of ‘multiple intelligences’. I have been gratified by the interest shown in this idea and the ways it’s been used in schools,
museums, and business around the world. But one unanticipated consequence has driven me to distraction and that’s the tendency of many people, including persons
whom I cherish, to credit me with the notion of ‘learning styles’ or to collapse ‘multiple intelligences’ with ‘learning styles’. It’s high time to relieve my pain and to set the
record straight.
First a word about ‘MI theory’. On the basis of research in several disciplines, including the study of how human capacities are represented in the brain, I developed the
idea that each of us has a number of relatively independent mental faculties, which can be termed our ‘multiple intelligences’. The basic idea is simplicity itself. A belief in
a single intelligence assumes that we have one central, all-purpose computer, and it determines how well we perform in every sector of life. In contrast, a belief in
multiple intelligences assumes that human beings have 7 to 10 distinct intelligences.
Even before I spoke and wrote about ‘MI’, the term ‘learning styles’ was being bandied about in educational circles. The idea, reasonable enough on the surface, is that all
children (indeed all of us) have distinctive minds and personalities. Accordingly, it makes sense to find out about learners and to teach and nurture them in ways that are
appropriate, that they value, and above all, are effective.
Two problems: first, the notion of ‘learning styles’ is itself not coherent. Those who use this term do not define the criteria for a style, nor where styles come from, how
they are recognized/ assessed/ exploited. Say that Johnny is said to have a learning style that is ‘impulsive’. Does that mean that Johnny is ‘impulsive’ about everything?
How do we know this? What does this imply about teaching? Should we teach ‘impulsively’, or should we compensate by ‘teaching reflectively’? What of learning style is
‘right-brained’ or visual or tactile? Same issues apply.
Problem #2: when researchers have tried to identify learning styles, teach consistently with those styles, and examine outcomes, there is not persuasive evidence that the
learning style analysis produces more effective outcomes than a ‘one size fits all approach’. Of course, the learning style analysis might have been inadequate. Or even if it
is on the mark, the fact that one intervention did not work does not mean that the concept of learning styles is fatally imperfect; another intervention might have proved
effective. Absence of evidence does not prove non-existence of a phenomenon; it signals to educational researchers: ‘back to the drawing boards’.
Here’s my considered judgment about the best way to analyze this lexical terrain:
Intelligence: We all have the multiple intelligences. But we single out, as a strong intelligence, an area where the person has considerable computational power.
Style or learning style: A hypothesis of how an individual approaches the range of materials. If an individual has a ‘reflective style’, he/she is hypothesized to be reflective
about the full range of materials. We cannot assume that reflectiveness in writing necessarily signals reflectiveness in one’s interaction with the others.
Senses: Sometimes people speak about a ‘visual’ learner or an ‘auditory’ learner. The implication is that some people learn through their eyes, others through their ears.
This notion is incoherent. Both spatial information and reading occur with the eyes, but they make use of entirely different cognitive faculties. What matters is the power
of the mental computer, the intelligence that acts upon that sensory information once picked up.
These distinctions are consequential. If people want to talk about ‘an impulsive style’ or a ‘visual learner’, that’s their prerogative. But they should recognize that these
labels may be unhelpful, at best, and ill-conceived at worst.
In contrast, there is strong evidence that human beings have a range of intelligences and that strength (or weakness) in one intelligence does not predict strength (or
weakness) in any other intelligences. All of us exhibit jagged profiles of intelligences. There are common sense ways of assessing our own intelligences, and even if it
seems appropriate, we can take a more formal test battery. And then, as teachers, parents, or self-assessors, we can decide how best to make use of this information.
Glossary:
1. K-12 educators defend the adoption of an interdisciplinary curriculum and methods for teaching with objects.
The text
www.tecconcursos.com.br/questoes/1275754
In the sentence “there was a single kind that could be measured by standardized tests”, it is possible to find an option to substitute the pronoun accordingly in
a) when.
b) which.
c) how.
d) whom.
www.tecconcursos.com.br/questoes/1275759
In the fragment “why the former is not the latter”, the highlighted words refer to
www.tecconcursos.com.br/questoes/1275763
TEXT
Howard Gardner: ‘Multiple intelligences’ are not ‘learning styles’ by Valerie Strauss
The fields of psychology and education were revolutionized 30 years ago when we now worldrenowned psychologist Howard Gardner published his 1983 book Frames of
Mind: The Theory of Multiple Intelligences, which detailed a new model of human intelligence that went beyond the traditional view that there was a single kind that
could be measured by standardized tests.
Gardner’s theory initially listed seven intelligences which work together: linguistic, logical-mathematical, musical, bodily-kinesthetic, interpersonal and intrapersonal; he
later added an eighth, naturalist intelligence and says there may be a few more. The theory became highly popular with K-12 educators1 around the world seeking ways
to reach students who did not respond to traditional approaches, but over time, ‘multiple intelligences’ somehow became synonymous with the concept of ‘learning styles’.
In this important post, Gardner explains why the former is not the latter.
It’s been 30 years since I developed the notion of ‘multiple intelligences’. I have been gratified by the interest shown in this idea and the ways it’s been used in schools,
museums, and business around the world. But one unanticipated consequence has driven me to distraction and that’s the tendency of many people, including persons
whom I cherish, to credit me with the notion of ‘learning styles’ or to collapse ‘multiple intelligences’ with ‘learning styles’. It’s high time to relieve my pain and to set the
record straight.
First a word about ‘MI theory’. On the basis of research in several disciplines, including the study of how human capacities are represented in the brain, I developed the
idea that each of us has a number of relatively independent mental faculties, which can be termed our ‘multiple intelligences’. The basic idea is simplicity itself. A belief in
a single intelligence assumes that we have one central, all-purpose computer, and it determines how well we perform in every sector of life. In contrast, a belief in
multiple intelligences assumes that human beings have 7 to 10 distinct intelligences.
Even before I spoke and wrote about ‘MI’, the term ‘learning styles’ was being bandied about in educational circles. The idea, reasonable enough on the surface, is that all
children (indeed all of us) have distinctive minds and personalities. Accordingly, it makes sense to find out about learners and to teach and nurture them in ways that are
appropriate, that they value, and above all, are effective.
Two problems: first, the notion of ‘learning styles’ is itself not coherent. Those who use this term do not define the criteria for a style, nor where styles come from, how they are recognized/assessed/exploited. Say that Johnny is said to have a learning style that is ‘impulsive’. Does that mean that Johnny is ‘impulsive’ about everything? How do we know this? What does this imply about teaching? Should we teach ‘impulsively’, or should we compensate by ‘teaching reflectively’? What if the learning style is ‘right-brained’ or visual or tactile? The same issues apply.
Problem #2: when researchers have tried to identify learning styles, teach consistently with those styles, and examine outcomes, there is no persuasive evidence that the learning style analysis produces more effective outcomes than a ‘one size fits all’ approach. Of course, the learning style analysis might have been inadequate. Or even if it
is on the mark, the fact that one intervention did not work does not mean that the concept of learning styles is fatally imperfect; another intervention might have proved
effective. Absence of evidence does not prove non-existence of a phenomenon; it signals to educational researchers: ‘back to the drawing boards’.
Here’s my considered judgment about the best way to analyze this lexical terrain:
Intelligence: We all have the multiple intelligences. But we single out, as a strong intelligence, an area where the person has considerable computational power.
Style or learning style: A hypothesis of how an individual approaches the range of materials. If an individual has a ‘reflective style’, he/she is hypothesized to be reflective
about the full range of materials. We cannot assume that reflectiveness in writing necessarily signals reflectiveness in one’s interactions with others.
Senses: Sometimes people speak about a ‘visual’ learner or an ‘auditory’ learner. The implication is that some people learn through their eyes, others through their ears.
This notion is incoherent. Both spatial information and reading occur with the eyes, but they make use of entirely different cognitive faculties. What matters is the power
of the mental computer, the intelligence that acts upon that sensory information once picked up.
These distinctions are consequential. If people want to talk about ‘an impulsive style’ or a ‘visual learner’, that’s their prerogative. But they should recognize that these
labels may be unhelpful, at best, and ill-conceived at worst.
In contrast, there is strong evidence that human beings have a range of intelligences and that strength (or weakness) in one intelligence does not predict strength (or weakness) in any other intelligences. All of us exhibit jagged profiles of intelligences. There are common sense ways of assessing our own intelligences, and, if it seems appropriate, we can take a more formal test battery. And then, as teachers, parents, or self-assessors, we can decide how best to make use of this information.
Glossary:
1. K-12 educators defend the adoption of an interdisciplinary curriculum and methods for teaching with objects.
In the sentence “it’s been 30 years since I developed the notion of ‘multiple intelligences’”, the contraction refers to
a) It has.
b) It been.
c) It is.
d) It was.
www.tecconcursos.com.br/questoes/1275769
TEXT
Howard Gardner: ‘Multiple intelligences’ are not ‘learning styles’ by Valerie Strauss (same text and glossary as reproduced above)
Mark the option which shows the appropriate question tag for the sentence “one unanticipated consequence has driven me to distraction”.
c) Has it?
d) Hasn’t it?
www.tecconcursos.com.br/questoes/1275774
TEXT
Howard Gardner: ‘Multiple intelligences’ are not ‘learning styles’ by Valerie Strauss (same text and glossary as reproduced above)
Mark the option that shows synonyms for the underlined expressions in “it’s high time to relieve my pain and to set the record straight”.
a) An important brake / to register.
www.tecconcursos.com.br/questoes/1275777
TEXT
Howard Gardner: ‘Multiple intelligences’ are not ‘learning styles’ by Valerie Strauss (same text and glossary as reproduced above)
Choose the best option to change the sentence “human capacities are represented in the brain” into the active form.
a) has represented
b) represents
d) representing
www.tecconcursos.com.br/questoes/1275779
TEXT
Howard Gardner: ‘Multiple intelligences’ are not ‘learning styles’ by Valerie Strauss (same text and glossary as reproduced above)
b) MI theory believes that instead of a central computer mastering various sectors, there are a larger amount of them relatively autonomous.
c) MI theory estimates the existence of a central computer responsible for 7 to 10 distinct intelligences.
www.tecconcursos.com.br/questoes/1275781
TEXT
Howard Gardner: ‘Multiple intelligences’ are not ‘learning styles’ by Valerie Strauss (same text and glossary as reproduced above)
Mark the alternative in which the problems described in paragraphs 6 and 7 are correctly summarized.
a) The idea of teaching distinct learning styles and their consistence were questionable concepts when researches started.
b) Educational researchers have found that an impulsive learning style causes problems in its outcomes.
c) There are proofs that different learning styles exist and produce positive results.
d) The notion of learning styles and the outcomes observed when teaching based on them need further studies.
www.tecconcursos.com.br/questoes/1275784
TEXT
Howard Gardner: ‘Multiple intelligences’ are not ‘learning styles’ by Valerie Strauss (same text and glossary as reproduced above)
Mark the option that contains the correct negative form for the sentence “researchers have tried to identify learning styles”.
a) Researchers have tried to not identify learning styles.
www.tecconcursos.com.br/questoes/1275788
TEXT
Howard Gardner: ‘Multiple intelligences’ are not ‘learning styles’ by Valerie Strauss (same text and glossary as reproduced above)
www.tecconcursos.com.br/questoes/1275793
TEXT
Howard Gardner: ‘Multiple intelligences’ are not ‘learning styles’ by Valerie Strauss (same text and glossary as reproduced above)
Mark the option which shows the appropriate plural form for the word “phenomenon”.
a) Phenomenae.
b) Phenomena.
c) Phenomenons.
d) Phenomenos.
www.tecconcursos.com.br/questoes/1275798
TEXT
Howard Gardner: ‘Multiple intelligences’ are not ‘learning styles’ by Valerie Strauss (same text and glossary as reproduced above)
www.tecconcursos.com.br/questoes/1275801
TEXT
Howard Gardner: ‘Multiple intelligences’ are not ‘learning styles’ by Valerie Strauss
The fields of psychology and education were revolutionized 30 years ago when the now world-renowned psychologist Howard Gardner published his 1983 book Frames of
Mind: The Theory of Multiple Intelligences, which detailed a new model of human intelligence that went beyond the traditional view that there was a single kind that
could be measured by standardized tests.
Gardner’s theory initially listed seven intelligences which work together: linguistic, logical-mathematical, musical, bodily-kinesthetic, spatial, interpersonal and intrapersonal; he
later added an eighth, naturalist intelligence and says there may be a few more. The theory became highly popular with K-12 educators1 around the world seeking ways
to reach students who did not respond to traditional approaches, but over time, ‘multiple intelligences’ somehow became synonymous with the concept of ‘learning styles’.
In this important post, Gardner explains why the former is not the latter.
It’s been 30 years since I developed the notion of ‘multiple intelligences’. I have been gratified by the interest shown in this idea and the ways it’s been used in schools,
museums, and businesses around the world. But one unanticipated consequence has driven me to distraction and that’s the tendency of many people, including persons
whom I cherish, to credit me with the notion of ‘learning styles’ or to collapse ‘multiple intelligences’ with ‘learning styles’. It’s high time to relieve my pain and to set the
record straight.
First a word about ‘MI theory’. On the basis of research in several disciplines, including the study of how human capacities are represented in the brain, I developed the
idea that each of us has a number of relatively independent mental faculties, which can be termed our ‘multiple intelligences’. The basic idea is simplicity itself. A belief in
a single intelligence assumes that we have one central, all-purpose computer, and it determines how well we perform in every sector of life. In contrast, a belief in
multiple intelligences assumes that human beings have 7 to 10 distinct intelligences.
Even before I spoke and wrote about ‘MI’, the term ‘learning styles’ was being bandied about in educational circles. The idea, reasonable enough on the surface, is that all
children (indeed all of us) have distinctive minds and personalities. Accordingly, it makes sense to find out about learners and to teach and nurture them in ways that are
appropriate, that they value, and above all, are effective.
Two problems: first, the notion of ‘learning styles’ is itself not coherent. Those who use this term do not define the criteria for a style, nor where styles come from, how
they are recognized/ assessed/ exploited. Say that Johnny is said to have a learning style that is ‘impulsive’. Does that mean that Johnny is ‘impulsive’ about everything?
How do we know this? What does this imply about teaching? Should we teach ‘impulsively’, or should we compensate by ‘teaching reflectively’? What of a learning style that is
‘right-brained’ or visual or tactile? Same issues apply.
Problem #2: when researchers have tried to identify learning styles, teach consistently with those styles, and examine outcomes, there is not persuasive evidence that the
learning style analysis produces more effective outcomes than a ‘one size fits all approach’. Of course, the learning style analysis might have been inadequate. Or even if it
is on the mark, the fact that one intervention did not work does not mean that the concept of learning styles is fatally imperfect; another intervention might have proved
effective. Absence of evidence does not prove non-existence of a phenomenon; it signals to educational researchers: ‘back to the drawing boards’.
Here’s my considered judgment about the best way to analyze this lexical terrain:
Intelligence: We all have the multiple intelligences. But we single out, as a strong intelligence, an area where the person has considerable computational power.
Style or learning style: A hypothesis of how an individual approaches the range of materials. If an individual has a ‘reflective style’, he/she is hypothesized to be reflective
about the full range of materials. We cannot assume that reflectiveness in writing necessarily signals reflectiveness in one’s interactions with others.
Senses: Sometimes people speak about a ‘visual’ learner or an ‘auditory’ learner. The implication is that some people learn through their eyes, others through their ears.
This notion is incoherent. Both spatial information and reading occur with the eyes, but they make use of entirely different cognitive faculties. What matters is the power
of the mental computer, the intelligence that acts upon that sensory information once picked up.
These distinctions are consequential. If people want to talk about ‘an impulsive style’ or a ‘visual learner’, that’s their prerogative. But they should recognize that these labels may be unhelpful, at best, and ill-conceived at worst.
In contrast, there is strong evidence that human beings have a range of intelligences and that strength (or weakness) in one intelligence does not predict strength (or
weakness) in any other intelligences. All of us exhibit jagged profiles of intelligences. There are common sense ways of assessing our own intelligences, and even if it
seems appropriate, we can take a more formal test battery. And then, as teachers, parents, or self-assessors, we can decide how best to make use of this information.
Glossary:
1. K-12 educators defend the adoption of an interdisciplinary curriculum and methods for teaching with objects.
Choose the option that shows the indirect speech form for “These distinctions are consequential.”
Gardner
www.tecconcursos.com.br/questoes/1275804
TEXT
Howard Gardner: ‘Multiple intelligences’ are not ‘learning styles’ by Valerie Strauss
The fields of psychology and education were revolutionized 30 years ago when the now world-renowned psychologist Howard Gardner published his 1983 book Frames of
Mind: The Theory of Multiple Intelligences, which detailed a new model of human intelligence that went beyond the traditional view that there was a single kind that
could be measured by standardized tests.
Gardner’s theory initially listed seven intelligences which work together: linguistic, logical-mathematical, musical, spatial, bodily-kinesthetic, interpersonal and intrapersonal; he
later added an eighth, naturalist intelligence and says there may be a few more. The theory became highly popular with K-12 educators1 around the world seeking ways
to reach students who did not respond to traditional approaches, but over time, ‘multiple intelligences’ somehow became synonymous with the concept of ‘learning styles’.
In this important post, Gardner explains why the former is not the latter.
It’s been 30 years since I developed the notion of ‘multiple intelligences’. I have been gratified by the interest shown in this idea and the ways it’s been used in schools,
museums, and businesses around the world. But one unanticipated consequence has driven me to distraction and that’s the tendency of many people, including persons
whom I cherish, to credit me with the notion of ‘learning styles’ or to collapse ‘multiple intelligences’ with ‘learning styles’. It’s high time to relieve my pain and to set the
record straight.
First a word about ‘MI theory’. On the basis of research in several disciplines, including the study of how human capacities are represented in the brain, I developed the
idea that each of us has a number of relatively independent mental faculties, which can be termed our ‘multiple intelligences’. The basic idea is simplicity itself. A belief in
a single intelligence assumes that we have one central, all-purpose computer, and it determines how well we perform in every sector of life. In contrast, a belief in
multiple intelligences assumes that human beings have 7 to 10 distinct intelligences.
Even before I spoke and wrote about ‘MI’, the term ‘learning styles’ was being bandied about in educational circles. The idea, reasonable enough on the surface, is that all
children (indeed all of us) have distinctive minds and personalities. Accordingly, it makes sense to find out about learners and to teach and nurture them in ways that are
appropriate, that they value, and above all, are effective.
Two problems: first, the notion of ‘learning styles’ is itself not coherent. Those who use this term do not define the criteria for a style, nor where styles come from, how
they are recognized/ assessed/ exploited. Say that Johnny is said to have a learning style that is ‘impulsive’. Does that mean that Johnny is ‘impulsive’ about everything?
How do we know this? What does this imply about teaching? Should we teach ‘impulsively’, or should we compensate by ‘teaching reflectively’? What of a learning style that is
‘right-brained’ or visual or tactile? Same issues apply.
Problem #2: when researchers have tried to identify learning styles, teach consistently with those styles, and examine outcomes, there is not persuasive evidence that the
learning style analysis produces more effective outcomes than a ‘one size fits all approach’. Of course, the learning style analysis might have been inadequate. Or even if it
is on the mark, the fact that one intervention did not work does not mean that the concept of learning styles is fatally imperfect; another intervention might have proved
effective. Absence of evidence does not prove non-existence of a phenomenon; it signals to educational researchers: ‘back to the drawing boards’.
Here’s my considered judgment about the best way to analyze this lexical terrain:
Intelligence: We all have the multiple intelligences. But we single out, as a strong intelligence, an area where the person has considerable computational power.
Style or learning style: A hypothesis of how an individual approaches the range of materials. If an individual has a ‘reflective style’, he/she is hypothesized to be reflective
about the full range of materials. We cannot assume that reflectiveness in writing necessarily signals reflectiveness in one’s interactions with others.
Senses: Sometimes people speak about a ‘visual’ learner or an ‘auditory’ learner. The implication is that some people learn through their eyes, others through their ears.
This notion is incoherent. Both spatial information and reading occur with the eyes, but they make use of entirely different cognitive faculties. What matters is the power
of the mental computer, the intelligence that acts upon that sensory information once picked up.
These distinctions are consequential. If people want to talk about ‘an impulsive style’ or a ‘visual learner’, that’s their prerogative. But they should recognize that these
labels may be unhelpful, at best, and ill-conceived at worst.
In contrast, there is strong evidence that human beings have a range of intelligences and that strength (or weakness) in one intelligence does not predict strength (or
weakness) in any other intelligences. All of us exhibit jagged profiles of intelligences. There are common sense ways of assessing our own intelligences, and even if it
seems appropriate, we can take a more formal test battery. And then, as teachers, parents, or self-assessors, we can decide how best to make use of this information.
Glossary:
1. K-12 educators defend the adoption of an interdisciplinary curriculum and methods for teaching with objects.
d) knowing intelligences are many, one becomes able to use them as needed.
www.tecconcursos.com.br/questoes/1274726
TEXT
Everyone has at least one best friend, some maybe even more. There are also those people who are just friends and also arch-enemies. People may think that just
because they are your friends it means that they are your best friend. The thing is, even though they are your friend, the relationship between a best friend and a friend
is different. Either way regardless of archenemies, friends or best friends, there are not many ways to compare any of these different types of friends, but you can easily
contrast them from one another.
Arch-enemies often know more about each other than two friends. In a comparison of personal relationships, friendship is considered to be closer than association,
although a wide range of degrees of intimacy exists in friendships, arch-enemies, and associations. Friendship and association can be thought of as spanning across the
same continuum. The study of friendship is included in the fields of sociology, social psychology, anthropology, philosophy, and zoology. Even animals have familiars!
Various academic theories of friendship have been proposed, among which are social exchange theory, equity theory, relational dialectics, and attachment styles. In
Russia, one typically bestows very few people the status of “friend”.
These friendships, however, make up in intensity what they lack in number. Friends are entitled to call each other by their first names alone, and to use diminutives. A
customary example of polite behavior is addressing "acquaintances" by full first name plus their patronymic. These could include relationships which elsewhere would be
qualified as real friendships, such as workplace relationships of long standing, or neighbors with whom one shares an occasional meal or a social drink.
Also in the Middle East and Central Asia, male friendships, while less restricted than in Russia, tend to be reserved and respectable in nature. They may use nicknames
and diminutive forms of their first names. In countries like India, it is believed in some parts that friendship is a form of respect, not born out of fear or superiority.
Friends are people who are equal in most standards, but still respect each other regardless of their attributes or shortcomings. Most of the countries previously mentioned
(Russia, Asia, and even the Middle East) and even our own nation are suffering a decline in genuine friendships.
According to a study documented in the June 2006 issue of the Journal American Sociological Review, Americans are thought to be suffering a loss in the quality and
quantity of close friendships since at least 1985. The study’s results state that twenty-five percent of Americans have no close confidants, and the average total number of
confidants per citizen has dropped from four to two. According to the study, Americans' dependence on family as a safety net went up from fifty-seven percent to eighty
percent; Americans' dependence on a partner or spouse went up from five percent to nine percent.
Recent studies have found a link between fewer friendships, especially in quality, and psychological and physiological regression. In the sequence of the emotional
development of the individual, friendships come after parental bonding and before the pair bonding engaged in at the approach of maturity. In the intervening period
between the end of early childhood and the onset of full adulthood, friendships are often the most important relationships in the emotional life of the adolescent, and are
often more intense than relationships experienced later in life.
Unfortunately, making friends seems to trouble many of people. Having no friends can be emotionally damaging for all ages, from young children to full grown adults. A
study performed by researchers from Purdue University found that post-secondary-education friendships (college and university) last longer than the friendships formed before them.
Children with Asperger syndrome and autism usually have some difficulty forming friendships. Socially crippling conditions like these are just one way that the social world
is so difficult to thrive in. This does not mean that they are not able to form friendships, however. With time, moderation and proper instruction, they are able to form
friendships after realizing their own strengths and weaknesses.
There are a number of theories that attempt to explain the link, including that: Good friends encourage their friends to lead more healthy lifestyles; Good friends encourage
their friends to seek help and access services, when needed; Good friends enhance their friend’s coping skills in dealing with illness and other health problems; and/or
Good friends actually affect physiological pathways that are protective of health. Regardless of what we think, we can clearly see that there are some ways that friends,
best friends and archenemies are the same, but in the end they are clearly more different. Nonetheless we all have every single type in our lives.
“In Russia, one typically bestows very few people the status of ‘friend’ ” means that
a) if you go to Russia you won't find a best friend.
c) you may have lots of close friends in Russia, but not all are trustful.
www.tecconcursos.com.br/questoes/1274728
TEXT
Everyone has at least one best friend, some maybe even more. There are also those people who are just friends and also arch-enemies. People may think that just
because they are your friends it means that they are your best friend. The thing is, even though they are your friend, the relationship between a best friend and a friend
is different. Either way regardless of archenemies, friends or best friends, there are not many ways to compare any of these different types of friends, but you can easily
contrast them from one another.
Arch-enemies often know more about each other than two friends. In a comparison of personal relationships, friendship is considered to be closer than association,
although a wide range of degrees of intimacy exists in friendships, arch-enemies, and associations. Friendship and association can be thought of as spanning across the
same continuum. The study of friendship is included in the fields of sociology, social psychology, anthropology, philosophy, and zoology. Even animals have familiars!
Various academic theories of friendship have been proposed, among which are social exchange theory, equity theory, relational dialectics, and attachment styles. In
Russia, one typically bestows very few people the status of “friend”.
These friendships, however, make up in intensity what they lack in number. Friends are entitled to call each other by their first names alone, and to use diminutives. A
customary example of polite behavior is addressing "acquaintances" by full first name plus their patronymic. These could include relationships which elsewhere would be
qualified as real friendships, such as workplace relationships of long standing, or neighbors with whom one shares an occasional meal or a social drink.
Also in the Middle East and Central Asia, male friendships, while less restricted than in Russia, tend to be reserved and respectable in nature. They may use nicknames
and diminutive forms of their first names. In countries like India, it is believed in some parts that friendship is a form of respect, not born out of fear or superiority.
Friends are people who are equal in most standards, but still respect each other regardless of their attributes or shortcomings. Most of the countries previously mentioned
(Russia, Asia, and even the Middle East) and even our own nation are suffering a decline in genuine friendships.
According to a study documented in the June 2006 issue of the Journal American Sociological Review, Americans are thought to be suffering a loss in the quality and
quantity of close friendships since at least 1985. The study’s results state that twenty-five percent of Americans have no close confidants, and the average total number of
confidants per citizen has dropped from four to two. According to the study, Americans' dependence on family as a safety net went up from fifty-seven percent to eighty
percent; Americans' dependence on a partner or spouse went up from five percent to nine percent.
Recent studies have found a link between fewer friendships, especially in quality, and psychological and physiological regression. In the sequence of the emotional
development of the individual, friendships come after parental bonding and before the pair bonding engaged in at the approach of maturity. In the intervening period
between the end of early childhood and the onset of full adulthood, friendships are often the most important relationships in the emotional life of the adolescent, and are
often more intense than relationships experienced later in life.
Unfortunately, making friends seems to trouble many of people. Having no friends can be emotionally damaging for all ages, from young children to full grown adults. A
study performed by researchers from Purdue University found that post-secondary-education friendships (college and university) last longer than the friendships formed before them.
Children with Asperger syndrome and autism usually have some difficulty forming friendships. Socially crippling conditions like these are just one way that the social world
is so difficult to thrive in. This does not mean that they are not able to form friendships, however. With time, moderation and proper instruction, they are able to form
friendships after realizing their own strengths and weaknesses.
There are a number of theories that attempt to explain the link, including that: Good friends encourage their friends to lead more healthy lifestyles; Good friends encourage
their friends to seek help and access services, when needed; Good friends enhance their friend’s coping skills in dealing with illness and other health problems; and/or
Good friends actually affect physiological pathways that are protective of health. Regardless of what we think, we can clearly see that there are some ways that friends,
best friends and archenemies are the same, but in the end they are clearly more different. Nonetheless we all have every single type in our lives.
Mark the option that shows the same meaning as in “Americans have no close confidants”.
a) Americans do have not any close confidants.
www.tecconcursos.com.br/questoes/1274730
TEXT
Everyone has at least one best friend, some maybe even more. There are also those people who are just friends and also arch-enemies. People may think that just
because they are your friends it means that they are your best friend. The thing is, even though they are your friend, the relationship between a best friend and a friend
is different. Either way regardless of archenemies, friends or best friends, there are not many ways to compare any of these different types of friends, but you can easily
contrast them from one another.
Arch-enemies often know more about each other than two friends. In a comparison of personal relationships, friendship is considered to be closer than association,
although a wide range of degrees of intimacy exists in friendships, arch-enemies, and associations. Friendship and association can be thought of as spanning across the
same continuum. The study of friendship is included in the fields of sociology, social psychology, anthropology, philosophy, and zoology. Even animals have familiars!
Various academic theories of friendship have been proposed, among which are social exchange theory, equity theory, relational dialectics, and attachment styles. In
Russia, one typically bestows very few people the status of “friend”.
These friendships, however, make up in intensity what they lack in number. Friends are entitled to call each other by their first names alone, and to use diminutives. A
customary example of polite behavior is addressing "acquaintances" by full first name plus their patronymic. These could include relationships which elsewhere would be
qualified as real friendships, such as workplace relationships of long standing, or neighbors with whom one shares an occasional meal or a social drink.
Also in the Middle East and Central Asia, male friendships, while less restricted than in Russia, tend to be reserved and respectable in nature. They may use nicknames
and diminutive forms of their first names. In countries like India, it is believed in some parts that friendship is a form of respect, not born out of fear or superiority.
Friends are people who are equal in most standards, but still respect each other regardless of their attributes or shortcomings. Most of the countries previously mentioned
(Russia, Asia, and even the Middle East) and even our own nation are suffering a decline in genuine friendships.
According to a study documented in the June 2006 issue of the Journal American Sociological Review, Americans are thought to be suffering a loss in the quality and
quantity of close friendships since at least 1985. The study’s results state that twenty-five percent of Americans have no close confidants, and the average total number of
confidants per citizen has dropped from four to two. According to the study, Americans' dependence on family as a safety net went up from fifty-seven percent to eighty
percent; Americans' dependence on a partner or spouse went up from five percent to nine percent.
Recent studies have found a link between fewer friendships, especially in quality, and psychological and physiological regression. In the sequence of the emotional
development of the individual, friendships come after parental bonding and before the pair bonding engaged in at the approach of maturity. In the intervening period
between the end of early childhood and the onset of full adulthood, friendships are often the most important relationships in the emotional life of the adolescent, and are
often more intense than relationships experienced later in life.
Unfortunately, making friends seems to trouble many of people. Having no friends can be emotionally damaging for all ages, from young children to full grown adults. A
study performed by researchers from Purdue University found that post-secondary-education friendships (college and university) last longer than the friendships formed before them.
Children with Asperger syndrome and autism usually have some difficulty forming friendships. Socially crippling conditions like these are just one way that the social world
is so difficult to thrive in. This does not mean that they are not able to form friendships, however. With time, moderation and proper instruction, they are able to form
friendships after realizing their own strengths and weaknesses.
There are a number of theories that attempt to explain the link, including that: Good friends encourage their friends to lead more healthy lifestyles; Good friends encourage
their friends to seek help and access services, when needed; Good friends enhance their friend’s coping skills in dealing with illness and other health problems; and/or
Good friends actually affect physiological pathways that are protective of health. Regardless of what we think, we can clearly see that there are some ways that friends,
best friends and archenemies are the same, but in the end they are clearly more different. Nonetheless we all have every single type in our lives.
www.tecconcursos.com.br/questoes/1274732
TEXT
Everyone has at least one best friend, some maybe even more. There are also those people who are just friends and also arch-enemies. People may think that just
because they are your friends it means that they are your best friend. The thing is, even though they are your friend, the relationship between a best friend and a friend
is different. Either way regardless of archenemies, friends or best friends, there are not many ways to compare any of these different types of friends, but you can easily
contrast them from one another.
Arch-enemies often know more about each other than two friends. In a comparison of personal relationships, friendship is considered to be closer than association,
although a wide range of degrees of intimacy exists in friendships, arch-enemies, and associations. Friendship and association can be thought of as spanning across the
same continuum. The study of friendship is included in the fields of sociology, social psychology, anthropology, philosophy, and zoology. Even animals have familiars!
Various academic theories of friendship have been proposed, among which are social exchange theory, equity theory, relational dialectics, and attachment styles. In
Russia, one typically bestows very few people the status of “friend”.
These friendships, however, make up in intensity what they lack in number. Friends are entitled to call each other by their first names alone, and to use diminutives. A
customary example of polite behavior is addressing "acquaintances" by full first name plus their patronymic. These could include relationships which elsewhere would be
qualified as real friendships, such as workplace relationships of long standing, or neighbors with whom one shares an occasional meal or a social drink.
Also in the Middle East and Central Asia, male friendships, while less restricted than in Russia, tend to be reserved and respectable in nature. They may use nicknames
and diminutive forms of their first names. In countries like India, it is believed in some parts that friendship is a form of respect, not born out of fear or superiority.
Friends are people who are equal in most standards, but still respect each other regardless of their attributes or shortcomings. Most of the countries previously mentioned
(Russia, Asia, and even the Middle East) and even our own nation are suffering a decline in genuine friendships.
According to a study documented in the June 2006 issue of the Journal American Sociological Review, Americans are thought to be suffering a loss in the quality and
quantity of close friendships since at least 1985. The study’s results state that twenty-five percent of Americans have no close confidants, and the average total number of
confidants per citizen has dropped from four to two. According to the study, Americans' dependence on family as a safety net went up from fifty-seven percent to eighty
percent; Americans' dependence on a partner or spouse went up from five percent to nine percent.
Recent studies have found a link between fewer friendships, especially in quality, and psychological and physiological regression. In the sequence of the emotional
development of the individual, friendships come after parental bonding and before the pair bonding engaged in at the approach of maturity. In the intervening period
between the end of early childhood and the onset of full adulthood, friendships are often the most important relationships in the emotional life of the adolescent, and are
often more intense than relationships experienced later in life.
Unfortunately, making friends seems to trouble many of people. Having no friends can be emotionally damaging for all ages, from young children to full grown adults. A
study performed by researchers from Purdue University found that post-secondary-education friendships (college and university) last longer than the friendships formed before them.
Children with Asperger syndrome and autism usually have some difficulty forming friendships. Socially crippling conditions like these are just one way that the social world
is so difficult to thrive in. This does not mean that they are not able to form friendships, however. With time, moderation and proper instruction, they are able to form
friendships after realizing their own strengths and weaknesses.
There are a number of theories that attempt to explain the link, including that: Good friends encourage their friends to lead more healthy lifestyles; Good friends encourage
their friends to seek help and access services, when needed; Good friends enhance their friend’s coping skills in dealing with illness and other health problems; and/or
Good friends actually affect physiological pathways that are protective of health. Regardless of what we think, we can clearly see that there are some ways that friends,
best friends and archenemies are the same, but in the end they are clearly more different. Nonetheless we all have every single type in our lives.
c) post-secondary-education friendships.
www.tecconcursos.com.br/questoes/1274736
TEXT
Everyone has at least one best friend, some maybe even more. There are also those people who are just friends and also arch-enemies. People may think that just
because they are your friends it means that they are your best friend. The thing is, even though they are your friend, the relationship between a best friend and a friend
is different. Either way regardless of archenemies, friends or best friends, there are not many ways to compare any of these different types of friends, but you can easily
contrast them from one another.
Arch-enemies often know more about each other than two friends. In a comparison of personal relationships, friendship is considered to be closer than association,
although a wide range of degrees of intimacy exists in friendships, arch-enemies, and associations. Friendship and association can be thought of as spanning across the
same continuum. The study of friendship is included in the fields of sociology, social psychology, anthropology, philosophy, and zoology. Even animals have familiars!
Various academic theories of friendship have been proposed, among which are social exchange theory, equity theory, relational dialectics, and attachment styles. In
Russia, one typically bestows very few people the status of “friend”.
These friendships, however, make up in intensity what they lack in number. Friends are entitled to call each other by their first names alone, and to use diminutives. A
customary example of polite behavior is addressing "acquaintances" by full first name plus their patronymic. These could include relationships which elsewhere would be
qualified as real friendships, such as workplace relationships of long standing, or neighbors with whom one shares an occasional meal or a social drink.
Also in the Middle East and Central Asia, male friendships, while less restricted than in Russia, tend to be reserved and respectable in nature. They may use nicknames
and diminutive forms of their first names. In countries like India, it is believed in some parts that friendship is a form of respect, not born out of fear or superiority.
Friends are people who are equal in most standards, but still respect each other regardless of their attributes or shortcomings. Most of the countries previously mentioned
(Russia, Asia, and even the Middle East) and even our own nation are suffering a decline in genuine friendships.
According to a study documented in the June 2006 issue of the Journal American Sociological Review, Americans are thought to be suffering a loss in the quality and
quantity of close friendships since at least 1985. The study’s results state that twenty-five percent of Americans have no close confidants, and the average total number of
confidants per citizen has dropped from four to two. According to the study, Americans' dependence on family as a safety net went up from fifty-seven percent to eighty
percent; Americans' dependence on a partner or spouse went up from five percent to nine percent.
Recent studies have found a link between fewer friendships, especially in quality, and psychological and physiological regression. In the sequence of the emotional
development of the individual, friendships come after parental bonding and before the pair bonding engaged in at the approach of maturity. In the intervening period
between the end of early childhood and the onset of full adulthood, friendships are often the most important relationships in the emotional life of the adolescent, and are
often more intense than relationships experienced later in life.
Unfortunately, making friends seems to trouble many of people. Having no friends can be emotionally damaging for all ages, from young children to full grown adults. A
study performed by researchers from Purdue University found that post-secondary-education friendships (college and university) last longer than the friendships formed before them.
Children with Asperger syndrome and autism usually have some difficulty forming friendships. Socially crippling conditions like these are just one way that the social world
is so difficult to thrive in. This does not mean that they are not able to form friendships, however. With time, moderation and proper instruction, they are able to form
friendships after realizing their own strengths and weaknesses.
There are a number of theories that attempt to explain the link, including that: Good friends encourage their friends to lead more healthy lifestyles; Good friends encourage
their friends to seek help and access services, when needed; Good friends enhance their friend’s coping skills in dealing with illness and other health problems; and/or
Good friends actually affect physiological pathways that are protective of health. Regardless of what we think, we can clearly see that there are some ways that friends,
best friends and archenemies are the same, but in the end they are clearly more different. Nonetheless we all have every single type in our lives.
Mark the option that is closest in meaning to “Unfortunately making friends seems to trouble many of people”.
a) Unfortunately making friends seems to annoy many of people.
www.tecconcursos.com.br/questoes/1274738
TEXT
Everyone has at least one best friend, some maybe even more. There are also those people who are just friends and also arch-enemies. People may think that just
because they are your friends it means that they are your best friend. The thing is, even though they are your friend, the relationship between a best friend and a friend
is different. Either way regardless of archenemies, friends or best friends, there are not many ways to compare any of these different types of friends, but you can easily
contrast them from one another.
Arch-enemies often know more about each other than two friends. In a comparison of personal relationships, friendship is considered to be closer than association,
although a wide range of degrees of intimacy exists in friendships, arch-enemies, and associations. Friendship and association can be thought of as spanning across the
same continuum. The study of friendship is included in the fields of sociology, social psychology, anthropology, philosophy, and zoology. Even animals have familiars!
Various academic theories of friendship have been proposed, among which are social exchange theory, equity theory, relational dialectics, and attachment styles. In
Russia, one typically bestows very few people the status of “friend”.
These friendships, however, make up in intensity what they lack in number. Friends are entitled to call each other by their first names alone, and to use diminutives. A
customary example of polite behavior is addressing "acquaintances" by full first name plus their patronymic. These could include relationships which elsewhere would be
qualified as real friendships, such as workplace relationships of long standing, or neighbors with whom one shares an occasional meal or a social drink.
Also in the Middle East and Central Asia, male friendships, while less restricted than in Russia, tend to be reserved and respectable in nature. They may use nicknames
and diminutive forms of their first names. In countries like India, it is believed in some parts that friendship is a form of respect, not born out of fear or superiority.
Friends are people who are equal in most standards, but still respect each other regardless of their attributes or shortcomings. Most of the countries previously mentioned
(Russia, Asia, and even the Middle East) and even our own nation are suffering a decline in genuine friendships.
According to a study documented in the June 2006 issue of the Journal American Sociological Review, Americans are thought to be suffering a loss in the quality and
quantity of close friendships since at least 1985. The study’s results state that twenty-five percent of Americans have no close confidants, and the average total number of
confidants per citizen has dropped from four to two. According to the study, Americans' dependence on family as a safety net went up from fifty-seven percent to eighty
percent; Americans' dependence on a partner or spouse went up from five percent to nine percent.
Recent studies have found a link between fewer friendships, especially in quality, and psychological and physiological regression. In the sequence of the emotional
development of the individual, friendships come after parental bonding and before the pair bonding engaged in at the approach of maturity. In the intervening period
between the end of early childhood and the onset of full adulthood, friendships are often the most important relationships in the emotional life of the adolescent, and are
often more intense than relationships experienced later in life.
Unfortunately, making friends seems to trouble many of people. Having no friends can be emotionally damaging for all ages, from young children to full grown adults. A
study performed by researchers from Purdue University found that post-secondary-education friendships (college and university) last longer than the friendships formed before them.
Children with Asperger syndrome and autism usually have some difficulty forming friendships. Socially crippling conditions like these are just one way that the social world
is so difficult to thrive in. This does not mean that they are not able to form friendships, however. With time, moderation and proper instruction, they are able to form
friendships after realizing their own strengths and weaknesses.
There are a number of theories that attempt to explain the link, including that: Good friends encourage their friends to lead more healthy lifestyles; Good friends encourage
their friends to seek help and access services, when needed; Good friends enhance their friend’s coping skills in dealing with illness and other health problems; and/or
Good friends actually affect physiological pathways that are protective of health. Regardless of what we think, we can clearly see that there are some ways that friends,
best friends and archenemies are the same, but in the end they are clearly more different. Nonetheless we all have every single type in our lives.
Mark the option which shows the same meaning as in “Americans' dependence on family”.
a) The family's dependence on Americans'.
www.tecconcursos.com.br/questoes/1274739
TEXT
Everyone has at least one best friend, some maybe even more. There are also those people who are just friends and also arch-enemies. People may think that just
because they are your friends it means that they are your best friend. The thing is, even though they are your friend, the relationship between a best friend and a friend
is different. Either way regardless of archenemies, friends or best friends, there are not many ways to compare any of these different types of friends, but you can easily
contrast them from one another.
Arch-enemies often know more about each other than two friends. In a comparison of personal relationships, friendship is considered to be closer than association,
although a wide range of degrees of intimacy exists in friendships, arch-enemies, and associations. Friendship and association can be thought of as spanning across the
same continuum. The study of friendship is included in the fields of sociology, social psychology, anthropology, philosophy, and zoology. Even animals have familiars!
Various academic theories of friendship have been proposed, among which are social exchange theory, equity theory, relational dialectics, and attachment styles. In
Russia, one typically bestows very few people the status of “friend”.
These friendships, however, make up in intensity what they lack in number. Friends are entitled to call each other by their first names alone, and to use diminutives. A
customary example of polite behavior is addressing "acquaintances" by full first name plus their patronymic. These could include relationships which elsewhere would be
qualified as real friendships, such as workplace relationships of long standing, or neighbors with whom one shares an occasional meal or a social drink.
Also in the Middle East and Central Asia, male friendships, while less restricted than in Russia, tend to be reserved and respectable in nature. They may use nicknames
and diminutive forms of their first names. In countries like India, it is believed in some parts that friendship is a form of respect, not born out of fear or superiority.
Friends are people who are equal in most standards, but still respect each other regardless of their attributes or shortcomings. Most of the countries previously mentioned
(Russia, Asia, and even the Middle East) and even our own nation are suffering a decline in genuine friendships.
According to a study documented in the June 2006 issue of the Journal American Sociological Review, Americans are thought to be suffering a loss in the quality and
quantity of close friendships since at least 1985. The study’s results state that twenty-five percent of Americans have no close confidants, and the average total number of
confidants per citizen has dropped from four to two. According to the study, Americans' dependence on family as a safety net went up from fifty-seven percent to eighty
percent; Americans' dependence on a partner or spouse went up from five percent to nine percent.
Recent studies have found a link between fewer friendships, especially in quality, and psychological and physiological regression. In the sequence of the emotional
development of the individual, friendships come after parental bonding and before the pair bonding engaged in at the approach of maturity. In the intervening period
between the end of early childhood and the onset of full adulthood, friendships are often the most important relationships in the emotional life of the adolescent, and are
often more intense than relationships experienced later in life.
Unfortunately, making friends seems to trouble many of people. Having no friends can be emotionally damaging for all ages, from young children to full grown adults. A
study performed by researchers from Purdue University found that post-secondary-education friendships (college and university) last longer than the friendships formed before them.
Children with Asperger syndrome and autism usually have some difficulty forming friendships. Socially crippling conditions like these are just one way that the social world
is so difficult to thrive in. This does not mean that they are not able to form friendships, however. With time, moderation and proper instruction, they are able to form
friendships after realizing their own strengths and weaknesses.
There are a number of theories that attempt to explain the link, including that: Good friends encourage their friends to lead more healthy lifestyles; Good friends encourage
their friends to seek help and access services, when needed; Good friends enhance their friend’s coping skills in dealing with illness and other health problems; and/or
Good friends actually affect physiological pathways that are protective of health. Regardless of what we think, we can clearly see that there are some ways that friends,
best friends and archenemies are the same, but in the end they are clearly more different. Nonetheless we all have every single type in our lives.
www.tecconcursos.com.br/questoes/1274740
TEXT
Everyone has at least one best friend, some maybe even more. There are also those people who are just friends and also arch-enemies. People may think that just
because they are your friends it means that they are your best friend. The thing is, even though they are your friend, the relationship between a best friend and a friend
is different. Either way regardless of archenemies, friends or best friends, there are not many ways to compare any of these different types of friends, but you can easily
contrast them from one another.
Arch-enemies often know more about each other than two friends. In a comparison of personal relationships, friendship is considered to be closer than association,
although a wide range of degrees of intimacy exists in friendships, arch-enemies, and associations. Friendship and association can be thought of as spanning across the
same continuum. The study of friendship is included in the fields of sociology, social psychology, anthropology, philosophy, and zoology. Even animals have familiars!
Various academic theories of friendship have been proposed, among which are social exchange theory, equity theory, relational dialectics, and attachment styles. In
Russia, one typically bestows very few people the status of “friend”.
These friendships, however, make up in intensity what they lack in number. Friends are entitled to call each other by their first names alone, and to use diminutives. A
customary example of polite behavior is addressing "acquaintances" by full first name plus their patronymic. These could include relationships which elsewhere would be
qualified as real friendships, such as workplace relationships of long standing, or neighbors with whom one shares an occasional meal or a social drink.
Also in the Middle East and Central Asia, male friendships, while less restricted than in Russia, tend to be reserved and respectable in nature. They may use nicknames
and diminutive forms of their first names. In countries like India, it is believed in some parts that friendship is a form of respect, not born out of fear or superiority.
Friends are people who are equal in most standards, but still respect each other regardless of their attributes or shortcomings. Most of the countries previously mentioned
(Russia, Asia, and even the Middle East) and even our own nation are suffering a decline in genuine friendships.
According to a study documented in the June 2006 issue of the Journal American Sociological Review, Americans are thought to be suffering a loss in the quality and
quantity of close friendships since at least 1985. The study’s results state that twenty-five percent of Americans have no close confidants, and the average total number of
confidants per citizen has dropped from four to two. According to the study, Americans' dependence on family as a safety net went up from fifty-seven percent to eighty
percent; Americans' dependence on a partner or spouse went up from five percent to nine percent.
Recent studies have found a link between fewer friendships, especially in quality, and psychological and physiological regression. In the sequence of the emotional
development of the individual, friendships come after parental bonding and before the pair bonding engaged in at the approach of maturity. In the intervening period
between the end of early childhood and the onset of full adulthood, friendships are often the most important relationships in the emotional life of the adolescent, and are
often more intense than relationships experienced later in life.
Unfortunately, making friends seems to trouble many of people. Having no friends can be emotionally damaging for all ages, from young children to full grown adults. A
study performed by researchers from Purdue University found that post-secondary-education friendships (college and university) last longer than the friendships formed before them.
Children with Asperger syndrome and autism usually have some difficulty forming friendships. Socially crippling conditions like these are just one way that the social world
is so difficult to thrive in. This does not mean that they are not able to form friendships, however. With time, moderation and proper instruction, they are able to form
friendships after realizing their own strengths and weaknesses.
There are a number of theories that attempt to explain the link, including that: Good friends encourage their friends to lead more healthy lifestyles; Good friends encourage
their friends to seek help and access services, when needed; Good friends enhance their friend’s coping skills in dealing with illness and other health problems; and/or
Good friends actually affect physiological pathways that are protective of health. Regardless of what we think, we can clearly see that there are some ways that friends,
best friends and archenemies are the same, but in the end they are clearly more different. Nonetheless we all have every single type in our lives.
a) the quality of American close friendships have been decreasing since the 1980's despite its quantity.
c) real friendships have been decreasing in some countries such as Russia and Asia.
d) in Middle East and Central Asia, friends born out of respect, fear and superiority standards.
www.tecconcursos.com.br/questoes/1274742
TEXT
Everyone has at least one best friend, some maybe even more. There are also those people who are just friends and also arch-enemies. People may think that just
because they are your friends it means that they are your best friend. The thing is, even though they are your friend, the relationship between a best friend and a friend
is different. Either way regardless of archenemies, friends or best friends, there are not many ways to compare any of these different types of friends, but you can easily
contrast them from one another.
Arch-enemies often know more about each other than two friends. In a comparison of personal relationships, friendship is considered to be closer than association,
although a wide range of degrees of intimacy exists in friendships, arch-enemies, and associations. Friendship and association can be thought of as spanning across the
same continuum. The study of friendship is included in the fields of sociology, social psychology, anthropology, philosophy, and zoology. Even animals have familiars!
Various academic theories of friendship have been proposed, among which are social exchange theory, equity theory, relational dialectics, and attachment styles. In
Russia, one typically bestows very few people the status of “friend”.
These friendships, however, make up in intensity what they lack in number. Friends are entitled to call each other by their first names alone, and to use diminutives. A
customary example of polite behavior is addressing "acquaintances" by full first name plus their patronymic. These could include relationships which elsewhere would be
qualified as real friendships, such as workplace relationships of long standing, or neighbors with whom one shares an occasional meal or a social drink.
Also in the Middle East and Central Asia, male friendships, while less restricted than in Russia, tend to be reserved and respectable in nature. They may use nicknames
and diminutive forms of their first names. In countries like India, it is believed in some parts that friendship is a form of respect, not born out of fear or superiority.
Friends are people who are equal in most standards, but still respect each other regardless of their attributes or shortcomings. Most of the countries previously mentioned
(Russia, Asia, and even the Middle East) and even our own nation are suffering a decline in genuine friendships.
According to a study documented in the June 2006 issue of the Journal American Sociological Review, Americans are thought to be suffering a loss in the quality and
quantity of close friendships since at least 1985. The study’s results state that twenty-five percent of Americans have no close confidants, and the average total number of
confidants per citizen has dropped from four to two. According to the study, Americans' dependence on family as a safety net went up from fifty-seven percent to eighty
percent; Americans' dependence on a partner or spouse went up from five percent to nine percent.
Recent studies have found a link between fewer friendships, especially in quality, and psychological and physiological regression. In the sequence of the emotional
development of the individual, friendships come after parental bonding and before the pair bonding engaged in at the approach of maturity. In the intervening period
between the end of early childhood and the onset of full adulthood, friendships are often the most important relationships in the emotional life of the adolescent, and are
often more intense than relationships experienced later in life.
Unfortunately, making friends seems to trouble many of people. Having no friends can be emotionally damaging for all ages, from young children to full grown adults. A
study performed by researchers from Purdue University found that post-secondary-education friendships (college and university) last longer than the friendships formed before them.
Children with Asperger syndrome and autism usually have some difficulty forming friendships. Socially crippling conditions like these are just one way that the social world
is so difficult to thrive in. This does not mean that they are not able to form friendships, however. With time, moderation and proper instruction, they are able to form
friendships after realizing their own strengths and weaknesses.
There are a number of theories that attempt to explain the link, including that: Good friends encourage their friends to lead more healthy lifestyles; Good friends encourage
their friends to seek help and access services, when needed; Good friends enhance their friend’s coping skills in dealing with illness and other health problems; and/or
Good friends actually affect physiological pathways that are protective of health. Regardless of what we think, we can clearly see that there are some ways that friends,
best friends and archenemies are the same, but in the end they are clearly more different. Nonetheless we all have every single type in our lives.
“This does not mean that they are not able to form friendships, however”. The option that replaces the highlighted expression is
a) so.
b) though.
c) thus.
d) most likely.
www.tecconcursos.com.br/questoes/1274744
TEXT
Everyone has at least one best friend, some maybe even more. There are also those people who are just friends and also arch-enemies. People may think that just
because they are your friends it means that they are your best friend. The thing is, even though they are your friend, the relationship between a best friend and a friend
is different. Either way regardless of archenemies, friends or best friends, there are not many ways to compare any of these different types of friends, but you can easily
contrast them from one another.
Arch-enemies often know more about each other than two friends. In a comparison of personal relationships, friendship is considered to be closer than association,
although a wide range of degrees of intimacy exists in friendships, arch-enemies, and associations. Friendship and association can be thought of as spanning across the
same continuum. The study of friendship is included in the fields of sociology, social psychology, anthropology, philosophy, and zoology. Even animals have familiars!
Various academic theories of friendship have been proposed, among which are social exchange theory, equity theory, relational dialectics, and attachment styles. In
Russia, one typically bestows very few people the status of “friend”.
These friendships, however, make up in intensity what they lack in number. Friends are entitled to call each other by their first names alone, and to use diminutives. A
customary example of polite behavior is addressing "acquaintances" by full first name plus their patronymic. These could include relationships which elsewhere would be
qualified as real friendships, such as workplace relationships of long standing, or neighbors with whom one shares an occasional meal or a social drink with.
Also in the Middle East and Central Asia, male friendships, while less restricted than in Russia, tend to be reserved and respectable in nature. They may use nicknames
and diminutive forms of their first names. In countries like India, it is believed in some parts that friendship is a form of respect, not born out of fear or superiority.
Friends are people who are equal in most standards, but still respect each other regardless of their attributes or shortcomings. Most of the countries previously mentioned
(Russia, Asia, and even the Middle East) and even our own nation are suffering a decline in genuine friendships.
According to a study documented in the June 2006 issue of the journal American Sociological Review, Americans are thought to be suffering a loss in the quality and
quantity of close friendships since at least 1985. The study’s results state that twenty-five percent of Americans have no close confidants, and the average total number of
confidants per citizen has dropped from four to two. According to the study, Americans' dependence on family as a safety net went up from fifty-seven percent to eighty
percent; Americans' dependence on a partner or spouse went up from five percent to nine percent.
Recent studies have found a link between fewer friendships, especially in quality, and psychological and physiological regression. In the sequence of the emotional
development of the individual, friendships come after parental bonding and before the pair bonding engaged in at the approach of maturity. In the intervening period
between the end of early childhood and the onset of full adulthood, friendships are often the most important relationships in the emotional life of the adolescent, and are
often more intense than relationships experienced later in life.
Unfortunately, making friends seems to trouble many people. Having no friends can be emotionally damaging for all ages, from young children to full-grown adults. A
study performed by researchers from Purdue University found that post-secondary-education friendships (college and university) last longer than the friendships that precede them.
Children with Asperger syndrome and autism usually have some difficulty forming friendships. Socially crippling conditions like these are just one way that the social world
is so difficult to thrive in. This does not mean that they are not able to form friendships, however. With time, moderation and proper instruction, they are able to form
friendships after realizing their own strengths and weaknesses.
There is a number of theories that attempt to explain the link, including that: Good friends encourage their friends to lead more healthy lifestyles; Good friends encourage
their friends to seek help and access services, when needed; Good friends enhance their friend’s coping skills in dealing with illness and other health problems; and/or
Good friends actually affect physiological pathways that are protective of health. Regardless of what we think, we can clearly see that there are some ways that friends,
best friends and archenemies are the same, but in the end they are clearly more different. Nonetheless we all have every single type in our lives.
Choose the best option to complete the active form of the sentence: “The study of friendship is included in the fields of sociology, social psychology, anthropology,
philosophy, and zoology”.
The fields of sociology, social psychology, anthropology, philosophy, and zoology ______________ the study of friendship.
a) Include
b) have included
c) are including
www.tecconcursos.com.br/questoes/1274748
TEXT
(Same text as above.)
Choose the option which shows the same kind of comparison as the one in the underlined adjective in “friendship is considered to be closer than association”.
a) Americans have no best friends.
www.tecconcursos.com.br/questoes/1274752
TEXT
(Same text as above.)
“Nonetheless we all have every single type in our lives”. The option that contains a synonym for the underlined expression is
a) nevertheless.
b) due to.
c) therefore.
d) although.
www.tecconcursos.com.br/questoes/1274754
TEXT
(Same text as above.)
Choose the option that shows the sentence “good friends encourage their friends to seek help and access services” in the indirect speech form.
a) The text told good friends encourage their friends to seek help and access services.
b) The text said us that good friends encourage their friends to seek help and access services.
c) The text told that good friends encourage their friends to seek help and access services.
d) The text said that good friends encouraged their friends to seek help and access services.
www.tecconcursos.com.br/questoes/1274758
TEXT
(Same text as above.)
“Good friends enhance their friend's coping skills in dealing with illness and other health problems”. The highlighted word has the same meaning as in
a) engrave.
b) entreat.
c) enlighten.
d) enlist.
www.tecconcursos.com.br/questoes/1274759
TEXT
(Same text as above.)
www.tecconcursos.com.br/questoes/1274762
TEXT
(Same text as above.)
In the sentence “there is a number of theories that attempt to explain the link”, it is possible to find an option to substitute the pronoun accordingly in
a) when.
b) how.
c) whom.
d) which.
www.tecconcursos.com.br/questoes/1273714
TEXT I
It is an invisible force that goes by many names. Computerization. Automation. Artificial intelligence. Technology. Innovation. And, everyone's favorite, ROBOTS.
Whatever name you prefer, some form of it has been stimulating progress and killing jobs — from tailors to paralegals — for centuries. But this time is different: nearly
half of American jobs today could be automated in "a decade or two". The question is: which half?
Another way of posing the same question is: Where do machines work better than people? Tractors are more powerful than farmers. Robotic arms are stronger and more
tireless than assembly-line workers. But in the past 30 years, software and robots have succeeded in replacing a particular kind of occupation: the average-wage, middle-
skill, routine-heavy worker, especially in manufacturing and office administration.
Indeed, it's projected that the next wave of computer progress will continue to endanger human work where it already has: manufacturing, administrative support, retail,
and transportation. Most remaining factory jobs are "likely to diminish over the next decades". Cashiers, counter clerks, and telemarketers are similarly endangered. On
the other hand, health care workers, people responsible for our safety, and management positions are the least likely to be automated.
We might be on the edge of an innovating moment in robotics and artificial intelligence. Although the past 30 years have reduced the middle, high- and low-skill jobs
have actually increased, as if protected from the invading armies of robots by their own moats. Higher-skill workers have been protected by a kind of social-intelligence
moat. Computers are historically good at executing routines, but they're bad at finding patterns, communicating with people, and making decisions, which is what
managers are paid to do. This is why some people think managers are, for the moment, one of the largest categories immune to the fast wave of AI.
Meanwhile, lower-skill workers have been protected by the Moravec moat. Hans Moravec was a futurist who pointed out that machine technology copied a savant infant:
Machines could do long math equations instantly and beat anybody in chess, but they can't answer a simple question or walk up a flight of stairs. As a result, not skilled
work done by people without much education (like home health care workers, or fast-food attendants) has been saved, too.
In the 19th century, new manufacturing technology replaced what was then skilled labor. In the second half of the 20th century, however, software technology took the
place of median-salaried office work. The first wave showed that machines are better at assembling things. The second showed that machines are better at organizing
things. Now data analytics and self-driving cars suggest they might be better at pattern recognition and driving. So what are we better at?
The safest industries and jobs are dominated by managers, health-care workers, and a super-category that includes education, media, and community service. One
conclusion to draw from this is that humans are, and will always be, superior at working with, and caring for other humans. In this light, automation doesn't make the
world worse. Far from it: it creates new opportunities for human creativity.
But robots are already creeping into diagnostics and surgeries. Schools are already experimenting with software that replaces teaching hours. The fact that some
industries have been safe from automation for the last three decades doesn't guarantee that they'll be safe for the next one.
It would be anxious enough if we knew exactly which jobs are next in line for automation. The truth is scarier. We don't really have a clue.
Glossary:
savant infant – a child with great knowledge and ability
to assemble – to make something by joining separate parts
to creep – to move slowly, quietly and carefully
d) human beings are getting worse and worse at performing robot’s tasks.
www.tecconcursos.com.br/questoes/1273715
TEXT I
(Same text as above.)
a) called
b) invented
c) examined
d) protected
www.tecconcursos.com.br/questoes/1273717
TEXT I
(Same text as above.)
c) technology advances.
d) social-intelligence moat.
www.tecconcursos.com.br/questoes/1273718
TEXT I
(Same text as above.)
In the sentence “Hans Moravec was a futurist who pointed out that machine technology copied a savant infant [...]” the pronoun “who” can be replaced, with no change
in meaning, by
a) which.
b) whose.
c) what.
d) that.
www.tecconcursos.com.br/questoes/1273720
TEXT I
(Same text as above.)
d) subjected to automation.
www.tecconcursos.com.br/questoes/1273721
TEXT I
It is an invisible force that goes by many names. Computerization. Automation. Artificial intelligence. Technology. Innovation. And, everyone's favorite, ROBOTS.
Whatever name you prefer, some form of it has been stimulating progress and killing jobs — from tailors to paralegals — for centuries. But this time is different: nearly
half of American jobs today could be automated in "a decade or two". The question is: which half?
Another way of posing the same question is: Where do machines work better than people? Tractors are more powerful than farmers. Robotic arms are stronger and more
tireless than assembly-line workers. But in the past 30 years, software and robots have succeeded in replacing a particular kind of occupation: the average-wage, middle-
skill, routine-heavy worker, especially in manufacturing and office administration.
Indeed, it's projected that the next wave of computer progress will continue to endanger human work where it already has: manufacturing, administrative support, retail,
and transportation. Most remaining factory jobs are "likely to diminish over the next decades". Cashiers, counter clerks, and telemarketers are similarly endangered. On
the other hand, health care workers, people responsible for our safety, and
management positions are the least likely to be automated.
We might be on the edge of an innovative moment in robotics and artificial intelligence. Although the past 30 years have hollowed out middle-skill jobs, high- and low-skill jobs
have actually increased, as if protected from the invading armies of robots by their own moats. Higher-skill workers have been protected by a kind of social-intelligence
moat. Computers are historically good at executing routines, but they're bad at finding patterns, communicating with people, and making decisions, which is what
managers are paid to do. This is why some people think managers are, for the moment, one of the largest categories immune to the fast wave of AI.
Meanwhile, lower-skill workers have been protected by the Moravec moat. Hans Moravec was a futurist who pointed out that machine technology copied a savant infant:
Machines could do long math equations instantly and beat anybody in chess, but they can't answer a simple question or walk up a flight of stairs. As a result, less-skilled
work done by people without much education (like home health care workers or fast-food attendants) has been saved, too.
In the 19th century, new manufacturing technology replaced what was then skilled labor. In the second half of the 20th century, however, software technology took the
place of median-salaried office work. The first wave showed that machines are better at assembling things. The second showed that machines are better at organizing
things. Now data analytics and self-driving cars suggest they might be better at pattern recognition and driving. So what are we better at?
The safest industries and jobs are dominated by managers, health-care workers, and a super-category that includes education, media, and community service. One
conclusion to draw from this is that humans are, and will always be, superior at working with, and caring for other humans. In this light, automation doesn't make the
world worse. Far from it: it creates new opportunities for human creativity.
But robots are already creeping into diagnostics and surgeries. Schools are already experimenting with software that replaces teaching hours. The fact that some
industries have been safe from automation for the last three decades doesn't guarantee that they'll be safe for the next one.
It would be worrying enough if we knew exactly which jobs are next in line for automation. The truth is scarier. We don't really have a clue.
Glossary:
savant infant – a child with great knowledge and ability
to assemble – to make something by joining separate parts
to creep – to move slowly, quietly and carefully
Mark the option closest in meaning to “We don't really have a clue” .
a) We're completely unable to guess what will happen.
c) We're not trying to find out what the right thing to do is.
www.tecconcursos.com.br/questoes/1273737
TEXT I
It is an invisible force that goes by many names. Computerization. Automation. Artificial intelligence. Technology. Innovation. And, everyone's favorite, ROBOTS.
Whatever name you prefer, some form of it has been stimulating progress and killing jobs — from tailors to paralegals — for centuries (A). But this time is different:
nearly half of American jobs today could be automated in "a decade or two". The question is: which half?
Another way of posing the same question is: Where do machines work better than people? Tractors are more powerful than farmers. Robotic arms are stronger and more
tireless than assembly-line workers. But in the past 30 years, software and robots have succeeded in replacing a particular kind of occupation: the average-wage, middle-
skill, routine-heavy worker, especially in manufacturing and office administration.
Indeed, it's projected that the next wave of computer progress will continue to endanger human work where it already has: manufacturing, administrative support, retail,
and transportation. Most remaining factory jobs are "likely to diminish over the next decades". Cashiers, counter clerks, and telemarketers are similarly endangered. On
the other hand, health care workers, people responsible for our safety (B), and management positions are the least likely to be automated.
We might be on the edge of an innovative moment in robotics and artificial intelligence. Although the past 30 years have hollowed out middle-skill jobs, high- and low-skill jobs
have actually increased, as if protected from the invading armies of robots by their own moats. Higher-skill workers have been protected by a kind of social-intelligence
moat. Computers are historically good at executing routines, but they're bad at finding patterns, communicating with people, and making decisions, which is what
managers are paid to do. This is why some people think managers are, for the moment, one of the largest categories immune to the fast wave of AI.
Meanwhile, lower-skill workers have been protected by the Moravec moat. Hans Moravec was a futurist who pointed out that machine technology copied a savant infant:
Machines could do long math equations instantly and beat anybody in chess, but they can't answer a simple question or walk up a flight of stairs. As a result, less-skilled
work done by people without much education (like home health care workers or fast-food attendants) has been saved, too.
In the 19th century, new manufacturing technology replaced what was then skilled labor. In the second half of the 20th century, however, software technology took the
place of median-salaried office work. The first wave showed that machines are better at assembling things. The second showed that machines are better at organizing
things. Now data analytics and self-driving cars suggest they might be better at pattern recognition and driving. So what are we better at?
The safest industries and jobs are dominated by managers, health-care workers, and a super-category that includes education, media, and community service. One
conclusion to draw from this is that humans are, and will always be, superior at working with, and caring for other humans (C). In this light, automation doesn't make the
world worse. Far from it: it creates new opportunities for human creativity (D).
But robots are already creeping into diagnostics and surgeries. Schools are already experimenting with software that replaces teaching hours. The fact that some
industries have been safe from automation for the last three decades doesn't guarantee that they'll be safe for the next one.
It would be worrying enough if we knew exactly which jobs are next in line for automation. The truth is scarier. We don't really have a clue.
Glossary:
savant infant – a child with great knowledge and ability
to assemble – to make something by joining separate parts
to creep – to move slowly, quietly and carefully
In the sentence “for the last three decades”, the underlined item was used in the same way as in
www.tecconcursos.com.br/questoes/1273748
TEXT I
It is an invisible force that goes by many names. Computerization. Automation. Artificial intelligence. Technology. Innovation. And, everyone's favorite, ROBOTS.
Whatever name you prefer, some form of it has been stimulating progress and killing jobs — from tailors to paralegals — for centuries. But this time is different: nearly
half of American jobs today could be automated in "a decade or two". The question is: which half?
Another way of posing the same question is: Where do machines work better than people? Tractors are more powerful (C) than farmers. Robotic arms are stronger (D)
and more tireless than assembly-line workers. But in the past 30 years, software and robots have succeeded in replacing a particular kind of occupation: the average-wage,
middle-skill, routine-heavy worker, especially in manufacturing and office administration.
Indeed, it's projected that the next wave of computer progress will continue to endanger human work where it already has: manufacturing, administrative support, retail,
and transportation. Most remaining factory jobs are "likely to diminish over the next decades". Cashiers, counter clerks, and telemarketers are similarly endangered. On
the other hand, health care workers, people responsible for our safety, and management positions are the least likely to be automated (B).
We might be on the edge of an innovative moment in robotics and artificial intelligence. Although the past 30 years have hollowed out middle-skill jobs, high- and low-skill jobs
have actually increased, as if protected from the invading armies of robots by their own moats. Higher-skill workers have been protected by a kind of social-intelligence
moat. Computers are historically good at executing routines, but they're bad at finding patterns, communicating with people, and making decisions, which is what
managers are paid to do. This is why some people think managers are, for the moment, one of the largest categories immune to the fast wave of AI.
Meanwhile, lower-skill workers have been protected by the Moravec moat. Hans Moravec was a futurist who pointed out that machine technology copied a savant infant:
Machines could do long math equations instantly and beat anybody in chess, but they can't answer a simple question or walk up a flight of stairs. As a result, less-skilled
work done by people without much education (like home health care workers or fast-food attendants) has been saved, too.
In the 19th century, new manufacturing technology replaced what was then skilled labor. In the second half of the 20th century, however, software technology took the
place of median-salaried office work. The first wave showed that machines are better at assembling things. The second showed that machines are better at organizing
things. Now data analytics and self-driving cars suggest they might be better at pattern recognition and driving. So what are we better at?
The safest industries and jobs are dominated by managers, health-care workers, and a super-category that includes education, media, and community service. One
conclusion to draw from this is that humans are, and will always be, superior at working with, and caring for other humans. In this light, automation doesn't make the
world worse. Far from it: it creates new opportunities for human creativity.
But robots are already creeping into diagnostics and surgeries. Schools are already experimenting with software that replaces teaching hours. The fact that some
industries have been safe from automation for the last three decades doesn't guarantee that they'll be safe for the next one.
It would be worrying enough if we knew exactly which jobs are next in line for automation. The truth is scarier (A). We don't really have a clue.
Glossary:
savant infant – a child with great knowledge and ability
to assemble – to make something by joining separate parts
to creep – to move slowly, quietly and carefully
Mark the option that contains an adjective in the same form as in “The safest industries and jobs are dominated by managers [...]” .
www.tecconcursos.com.br/questoes/1273763
TEXT II
One of my primary complaints with higher education is that it tends to prepare students for jobs of the past. Similarly, the best-paying jobs of the future are all jobs that
currently exist today. Many of them will still exist in the future, but with some changes as technology and communication systems make their impact.
As a rule of thumb, 60% of the jobs 10 years from now haven’t been invented yet. With that in mind, I’ve decided to put together a list of some jobs that will be in high
demand in the future.
Many of the changes we see today will cause new jobs to materialize quickly. This first section deals with new positions that will likely be developed within the next 10
years.
3D printing engineers – Classes in 3D printing are already being introduced into high schools and the demand for printer-produced products will rise quickly. The trend
will be for these worker-less workshops to enter virtually different fields, at the same time, driving the need for competent technicians and engineers to design and
maintain the next wave of this technology.
Nano-medics – Health professionals capable of working on the nano-level, both in designing diagnostics systems, remedies, and monitoring solutions will be in high
demand.
Organ agents – The demand for transplantable organs is exploding and people who can track down and deliver healthy organs will be in hot demand.
Octogenarian service providers – As the population continues to age we will have record numbers of people living into their 80s, 90s, and 100s. This growing group
of active oldsters will provide a demand for goods and services currently not being addressed in today’s marketplace.
A number of technologies currently on the drawing board will require a bit longer lead time before the industry comes into its own. Here are a few examples of these
kinds of jobs:
Body part & limb makers – The organ agents listed before will quickly find themselves out of work as soon as we figure out how to efficiently grow and mass produce
our own organs from scratch.
Earthquake forecasters – While scientists are developing skills to work with nanoscale precision on the earth’s surface, the best we can know about below the surface
is blindfolded guesswork done with 100-mile precision. What we don’t know is literally killing us – over 226,000 killed in 2010 alone. But that will change over time as we
begin to understand the inner working of the earth and accurately forecast when the next big quakes are about to hit.
“Heavy air” engineers – Compressed air is useful in a wide variety of ways. However, we have yet to figure out how to compress streams of air as they pass through
our existing atmosphere. Once we do, it will create untold opportunity for non-surface based housing and transportation system, weather control, and other kinds of
experimentation.
Final thoughts
The jobs and occupations listed above are just scratching the surface. This list is intended to help stretch your imagination and start you down a path of imagining your
own future.
I. Universities and colleges prepare students for jobs that already exist.
II. All jobs of the future will be better paid than today's jobs.
III. The majority of future jobs are still unknown.
a) I, II and III.
b) II.
c) III.
d) I and III.
www.tecconcursos.com.br/questoes/1273764
TEXT II
One of my primary complaints with higher education is that it tends to prepare students for jobs of the past. Similarly, the best-paying jobs of the future are all jobs that
currently exist today. Many of them will still exist in the future, but with some changes as technology and communication systems make their impact.
As a rule of thumb, 60% of the jobs 10 years from now haven’t been invented yet. With that in mind, I’ve decided to put together a list of some jobs that will be in high
demand in the future.
Many of the changes we see today will cause new jobs to materialize quickly. This first section deals with new positions that will likely be developed within the next 10
years.
3D printing engineers – Classes in 3D printing are already being introduced into high schools and the demand for printer-produced products will rise quickly. The trend
will be for these worker-less workshops to enter virtually different fields, at the same time, driving the need for competent technicians and engineers to design and
maintain the next wave of this technology.
Nano-medics – Health professionals capable of working on the nano-level, both in designing diagnostics systems, remedies, and monitoring solutions will be in high
demand.
Organ agents – The demand for transplantable organs is exploding and people who can track down and deliver healthy organs will be in hot demand.
Octogenarian service providers – As the population continues to age we will have record numbers of people living into their 80s, 90s, and 100s. This growing group
of active oldsters will provide a demand for goods and services currently not being addressed in today’s marketplace.
A number of technologies currently on the drawing board will require a bit longer lead time before the industry comes into its own. Here are a few examples of these
kinds of jobs:
Body part & limb makers – The organ agents listed before will quickly find themselves out of work as soon as we figure out how to efficiently grow and mass produce
our own organs from scratch.
Earthquake forecasters – While scientists are developing skills to work with nanoscale precision on the earth’s surface, the best we can know about below the surface
is blindfolded guesswork done with 100-mile precision. What we don’t know is literally killing us – over 226,000 killed in 2010 alone. But that will change over time as we
begin to understand the inner working of the earth and accurately forecast when the next big quakes are about to hit.
“Heavy air” engineers – Compressed air is useful in a wide variety of ways. However, we have yet to figure out how to compress streams of air as they pass through
our existing atmosphere. Once we do, it will create untold opportunity for non-surface based housing and transportation system, weather control, and other kinds of
experimentation.
Final thoughts
The jobs and occupations listed above are just scratching the surface. This list is intended to help stretch your imagination and start you down a path of imagining your
own future.
www.tecconcursos.com.br/questoes/1273766
TEXT II
One of my primary complaints with higher education is that it tends to prepare students for jobs of the past. Similarly, the best-paying jobs of the future are all jobs that
currently exist today. Many of them will still exist in the future, but with some changes as technology and communication systems make their impact.
As a rule of thumb, 60% of the jobs 10 years from now haven’t been invented yet. With that in mind, I’ve decided to put together a list of some jobs that will be in high
demand in the future.
Many of the changes we see today will cause new jobs to materialize quickly. This first section deals with new positions that will likely be developed within the next 10
years.
3D printing engineers – Classes in 3D printing are already being introduced into high schools and the demand for printer-produced products will rise quickly. The trend
will be for these worker-less workshops to enter virtually different fields, at the same time, driving the need for competent technicians and engineers to design and
maintain the next wave of this technology.
Nano-medics – Health professionals capable of working on the nano-level, both in designing diagnostics systems, remedies, and monitoring solutions will be in high
demand.
Organ agents – The demand for transplantable organs is exploding and people who can track down and deliver healthy organs will be in hot demand.
Octogenarian service providers – As the population continues to age we will have record numbers of people living into their 80s, 90s, and 100s. This growing group
of active oldsters will provide a demand for goods and services currently not being addressed in today’s marketplace.
A number of technologies currently on the drawing board will require a bit longer lead time before the industry comes into its own. Here are a few examples of these
kinds of jobs:
Body part & limb makers – The organ agents listed before will quickly find themselves out of work as soon as we figure out how to efficiently grow and mass produce
our own organs from scratch.
Earthquake forecasters – While scientists are developing skills to work with nanoscale precision on the earth’s surface, the best we can know about below the surface
is blindfolded guesswork done with 100-mile precision. What we don’t know is literally killing us – over 226,000 killed in 2010 alone. But that will change over time as we
begin to understand the inner working of the earth and accurately forecast when the next big quakes are about to hit.
“Heavy air” engineers – Compressed air is useful in a wide variety of ways. However, we have yet to figure out how to compress streams of air as they pass through
our existing atmosphere. Once we do, it will create untold opportunity for non-surface based housing and transportation system, weather control, and other kinds of
experimentation.
Final thoughts
The jobs and occupations listed above are just scratching the surface. This list is intended to help stretch your imagination and start you down a path of imagining your
own future.
b) In the future, old people will need services that have not been given attention yet.
c) Changes in the world today will stimulate the creation of new jobs.
www.tecconcursos.com.br/questoes/1273767
TEXT II
One of my primary complaints with higher education is that it tends to prepare students for jobs of the past. Similarly, the best-paying jobs of the future are all jobs that
currently exist today. Many of them will still exist in the future, but with some changes as technology and communication systems make their impact.
As a rule of thumb, 60% of the jobs 10 years from now haven’t been invented yet. With that in mind, I’ve decided to put together a list of some jobs that will be in high
demand in the future.
Many of the changes we see today will cause new jobs to materialize quickly. This first section deals with new positions that will likely be developed within the next 10
years.
3D printing engineers – Classes in 3D printing are already being introduced into high schools and the demand for printer-produced products will rise quickly. The trend
will be for these worker-less workshops to enter virtually different fields, at the same time, driving the need for competent technicians and engineers to design and
maintain the next wave of this technology.
Nano-medics – Health professionals capable of working on the nano-level, both in designing diagnostics systems, remedies, and monitoring solutions will be in high
demand.
Organ agents – The demand for transplantable organs is exploding and people who can track down and deliver healthy organs will be in hot demand.
Octogenarian service providers – As the population continues to age we will have record numbers of people living into their 80s, 90s, and 100s. This growing group
of active oldsters will provide a demand for goods and services currently not being addressed in today’s marketplace.
A number of technologies currently on the drawing board will require a bit longer lead time before the industry comes into its own. Here are a few examples of these
kinds of jobs:
Body part & limb makers – The organ agents listed before will quickly find themselves out of work as soon as we figure out how to efficiently grow and mass produce
our own organs from scratch.
Earthquake forecasters – While scientists are developing skills to work with nanoscale precision on the earth’s surface, the best we can know about below the surface
is blindfolded guesswork done with 100-mile precision. What we don’t know is literally killing us – over 226,000 killed in 2010 alone. But that will change over time as we
begin to understand the inner working of the earth and accurately forecast when the next big quakes are about to hit.
“Heavy air” engineers – Compressed air is useful in a wide variety of ways. However, we have yet to figure out how to compress streams of air as they pass through
our existing atmosphere. Once we do, it will create untold opportunity for non-surface based housing and transportation system, weather control, and other kinds of
experimentation.
Final thoughts
The jobs and occupations listed above are just scratching the surface. This list is intended to help stretch your imagination and start you down a path of imagining your
own future.
“Many of them will still exist in the future, but with some changes as technology and communication systems make their impact.” The underlined word refers to
a) primary complaints.
b) students.
c) jobs.
d) changes.
www.tecconcursos.com.br/questoes/1273771
137)
Directions: Answer question according to TEXT II.
TEXT II
One of my primary complaints with higher education is that it tends to prepare students for jobs of the past. Similarly, the best-paying jobs of the future are all jobs that
currently exist today. Many of them will still exist in the future, but with some changes as technology and communication systems make their impact.
As a rule of thumb, 60% of the jobs 10 years from now haven’t been invented yet. With that in mind, I’ve decided to put together a list of some jobs that will be in high
demand in the future.
Many of the changes we see today will cause new jobs to materialize quickly. This first section deals with new positions that will likely be developed within the next 10
years.
3D printing engineers – Classes in 3D printing are already being introduced into high schools and the demand for printer-produced products will rise quickly. The trend
will be for these worker-less workshops to enter virtually different fields, at the same time, driving the need for competent technicians and engineers to design and
maintain the next wave of this technology.
Nano-medics – Health professionals capable of working on the nano-level, both in designing diagnostics systems, remedies, and monitoring solutions will be in high
demand.
Organ agents – The demand for transplantable organs is exploding and people who can track down and deliver healthy organs will be in hot demand.
Octogenarian service providers – As the population continues to age we will have record numbers of people living into their 80s, 90s, and 100s. This growing group
of active oldsters will provide a demand for goods and services currently not being addressed in today’s marketplace.
A number of technologies currently on the drawing board will require a bit longer lead time before the industry comes into its own. Here are a few examples of these
kinds of jobs:
Body part & limb makers – The organ agents listed before will quickly find themselves out of work as soon as we figure out how to efficiently grow and mass produce
our own organs from scratch.
Earthquake forecasters – While scientists are developing skills to work with nanoscale precision on the earth’s surface, the best we can know about below the surface
is blindfolded guesswork done with 100-mile precision. What we don’t know is literally killing us – over 226,000 killed in 2010 alone. But that will change over time as we
begin to understand the inner working of the earth and accurately forecast when the next big quakes are about to hit.
“Heavy air” engineers – Compressed air is useful in a wide variety of ways. However, we have yet to figure out how to compress streams of air as they pass through
our existing atmosphere. Once we do, it will create untold opportunity for non-surface based housing and transportation system, weather control, and other kinds of
experimentation.
Final thoughts
The jobs and occupations listed above are just scratching the surface. This list is intended to help stretch your imagination and start you down a path of imagining your
own future.
“This first section deals with new positions that will likely be developed within the next 10 years.” The underlined word is closest in meaning to
a) never.
b) exactly.
c) mainly.
d) probably.
www.tecconcursos.com.br/questoes/1273778
138)
Directions: Answer question according to TEXT II.
TEXT II
One of my primary complaints with higher education is that it tends to prepare students for jobs of the past. Similarly, the best-paying jobs of the future are all jobs that
currently exist today. Many of them will still exist in the future, but with some changes as technology and communication systems make their impact.
As a rule of thumb, 60% of the jobs 10 years from now haven’t been invented yet. With that in mind, I’ve decided to put together a list of some jobs that will be in high
demand in the future.
Many of the changes we see today will cause new jobs to materialize quickly. This first section deals with new positions that will likely be developed within the next 10
years.
3D printing engineers – Classes in 3D printing are already being introduced into high schools and the demand for printer-produced products will rise (A) quickly. The
trend will be for these worker-less workshops to enter virtually different fields, at the same time, driving the need for competent technicians and engineers to design and
maintain the next wave of this technology.
Nano-medics – Health professionals capable of working on the nano-level, both in designing diagnostics systems, remedies, and monitoring solutions will be in high
demand.
Organ agents – The demand for transplantable organs is exploding (B) and people who can track down and deliver healthy organs will be in hot demand.
Octogenarian service providers – As the population continues to age (C) we will have record numbers of people living into their 80s, 90s, and 100s. This growing
group of active oldsters will provide a demand for goods and services currently not being addressed in today’s marketplace.
A number of technologies currently on the drawing board will require a bit longer lead time before the industry comes into its own. Here are a few examples of these
kinds of jobs:
Body part & limb makers – The organ agents listed before will quickly find themselves out of work as soon as we figure out how to efficiently grow and mass produce
our own organs from scratch.
Earthquake forecasters – While scientists are developing skills to work with nanoscale precision on the earth’s surface, the best we can know about below the surface
is blindfolded guesswork done with 100-mile precision. What we don’t know is literally killing (D) us – over 226,000 killed in 2010 alone. But that will change over time as
we begin to understand the inner working of the earth and accurately forecast when the next big quakes are about to hit.
“Heavy air” engineers – Compressed air is useful in a wide variety of ways. However, we have yet to figure out how to compress streams of air as they pass through
our existing atmosphere. Once we do, it will create untold opportunity for non-surface based housing and transportation system, weather control, and other kinds of
experimentation.
Final thoughts
The jobs and occupations listed above are just scratching the surface. This list is intended to help stretch your imagination and start you down a path of imagining your
own future.
All the following verbs are used in the text in their literal meaning, EXCEPT
a) rise.
b) explode.
c) age .
d) kill .
www.tecconcursos.com.br/questoes/1273780
TEXT II
One of my primary complaints with higher education is that it tends to prepare students for jobs of the past. Similarly, the best-paying jobs of the future are all jobs that
currently exist today. Many of them will still exist in the future, but with some changes as technology and communication systems make their impact.
As a rule of thumb, 60% of the jobs 10 years from now haven’t been invented yet. With that in mind, I’ve decided to put together a list of some jobs that will be in high
demand in the future.
Many of the changes we see today will cause new jobs to materialize quickly. This first section deals with new positions that will likely be developed within the next 10
years.
3D printing engineers – Classes in 3D printing are already being introduced into high schools and the demand for printer-produced products will rise quickly. The trend
will be for these worker-less workshops to enter virtually different fields, at the same time, driving the need for competent technicians and engineers to design and
maintain the next wave of this technology.
Nano-medics – Health professionals capable of working on the nano-level, both in designing diagnostics systems, remedies, and monitoring solutions will be in high
demand.
Organ agents – The demand for transplantable organs is exploding and people who can track down and deliver healthy organs will be in hot demand.
Octogenarian service providers – As the population continues to age we will have record numbers of people living into their 80s, 90s, and 100s. This growing group
of active oldsters will provide a demand for goods and services currently not being addressed in today’s marketplace.
A number of technologies currently on the drawing board will require a bit longer lead time before the industry comes into its own. Here are a few examples of these
kinds of jobs:
Body part & limb makers – The organ agents listed before will quickly find themselves out of work as soon as we figure out how to efficiently grow and mass produce
our own organs from scratch.
Earthquake forecasters – While scientists are developing skills to work with nanoscale precision on the earth’s surface, the best we can know about below the surface
is blindfolded guesswork done with 100-mile precision. What we don’t know is literally killing us – over 226,000 killed in 2010 alone. But that will change over time as we
begin to understand the inner working of the earth and accurately forecast when the next big quakes are about to hit.
“Heavy air” engineers – Compressed air is useful in a wide variety of ways. However, we have yet to figure out how to compress streams of air as they pass through
our existing atmosphere. Once we do, it will create untold opportunity for non-surface based housing and transportation system, weather control, and other kinds of
experimentation.
Final thoughts
The jobs and occupations listed above are just scratching the surface. This list is intended to help stretch your imagination and start you down a path of imagining your
own future.
“The jobs and occupations listed above are just scratching the surface.” This sentence means that
a) occupations listed in the text are not really interesting for young workers.
b) the jobs' reality relies much upon the possibility one has to work.
c) new jobs and occupations are becoming real as time goes by.
www.tecconcursos.com.br/questoes/1273781
TEXT II
One of my primary complaints with higher education is that it tends to prepare students for jobs of the past. Similarly, the best-paying jobs of the future are all jobs that
currently exist today. Many of them will still exist in the future, but with some changes as technology and communication systems make their impact.
As a rule of thumb, 60% of the jobs 10 years from now haven’t been invented yet. With that in mind, I’ve decided to put together a list of some jobs that will be in high
demand in the future.
Many of the changes we see today will cause new jobs to materialize quickly. This first section deals with new positions that will likely be developed within the next 10
years.
3D printing engineers – Classes in 3D printing are already being introduced into high schools and the demand for printer-produced products will rise quickly. The trend
will be for these worker-less workshops to enter virtually different fields, at the same time, driving the need for competent technicians and engineers to design and
maintain the next wave of this technology.
Nano-medics – Health professionals capable of working on the nano-level, both in designing diagnostics systems, remedies, and monitoring solutions will be in high
demand.
Organ agents – The demand for transplantable organs is exploding and people who can track down and deliver healthy organs will be in hot demand.
Octogenarian service providers – As the population continues to age we will have record numbers of people living into their 80s, 90s, and 100s. This growing group
of active oldsters will provide a demand for goods and services currently not being addressed in today’s marketplace.
A number of technologies currently on the drawing board will require a bit longer lead time before the industry comes into its own. Here are a few examples of these
kinds of jobs:
Body part & limb makers – The organ agents listed before will quickly find themselves out of work as soon as we figure out how to efficiently grow and mass produce
our own organs from scratch.
Earthquake forecasters – While scientists are developing skills to work with nanoscale precision on the earth’s surface, the best we can know about below the surface
is blindfolded guesswork done with 100-mile precision. What we don’t know is literally killing us – over 226,000 killed in 2010 alone. But that will change over time as we
begin to understand the inner working of the earth and accurately forecast when the next big quakes are about to hit.
“Heavy air” engineers – Compressed air is useful in a wide variety of ways. However, we have yet to figure out how to compress streams of air as they pass through
our existing atmosphere. Once we do, it will create untold opportunity for non-surface based housing and transportation system, weather control, and other kinds of
experimentation.
Final thoughts
The jobs and occupations listed above are just scratching the surface. This list is intended to help stretch your imagination and start you down a path of imagining your
own future.
“There is no future in any job. The future lies in the person who holds the job.” – George W. Crane.
b) Crane said there will be no future in any job. The future lay in the person who holds the job.
c) Crane says there is no future in any job. The future lies in the person who held the job.
d) Crane said that there was no future in any job. He also added that the future lay in the person who held the job.
www.tecconcursos.com.br/questoes/1272479
TEXT I
ETHICS OF WAR
Human beings have been fighting each other since prehistoric times, and people have been discussing the rights and wrongs of it for almost as long.
The Ethics of War starts by assuming that war is a bad thing, and should be avoided if possible, but it recognizes that there can be situations when war may be the lesser
evil of several bad choices.
War is a bad thing because it involves deliberately killing or injuring people, and this is a fundamental wrong – an abuse of the victims' human rights.
The purpose of war ethics is to help decide what is right or wrong, both for individuals and countries, and to contribute to debates on public policy, and ultimately to
government and individual action.
War ethics also leads to the creation of formal codes of war (e.g. the Hague and Geneva conventions), the drafting and implementation of rules of engagement for
soldiers, and the punishment of soldiers and others for war crimes.
The discussion of the ethics of war goes back to the Greeks and Romans, although neither civilization behaved particularly well in war.
In the Christian tradition war ethics were developed by St Augustine, and later by St Thomas Aquinas and others.
Hugo Grotius (1583-1645), a Dutch philosopher and author of De Jure Belli Ac Pacis (The Rights of War and Peace), wrote down the conditions for a just war that are
accepted today.
Cicero argued that there was no acceptable reason for war outside of just revenge or self-defence – in which he included the defence of honor.
He also argued that a war could not be just unless it was publicly declared and unless compensation for the enemy’s offence had first been demanded.
Cicero based his argument on the assumption that nature and human reason influenced a society against war, and that there was a fundamental code of behavior for
nations.
b) show that moral rules of behavior concerning wars had long been discussed.
d) influence societies to follow their nature and therefore, justify their warlike behavior when facing future wars.
www.tecconcursos.com.br/questoes/1272487
TEXT I
ETHICS OF WAR
Human beings have been fighting each other since prehistoric times, and people have been discussing the rights and wrongs of it for almost as long.
The Ethics of War starts by assuming that war is a bad thing, and should be avoided if possible, but it recognizes that there can be situations when war may be the lesser
evil of several bad choices.
War is a bad thing because it involves deliberately killing or injuring people, and this is a fundamental wrong – an abuse of the victims' human rights.
The purpose of war ethics is to help decide what is right or wrong, both for individuals and countries, and to contribute to debates on public policy, and ultimately to
government and individual action.
War ethics also leads to the creation of formal codes of war (e.g. the Hague and Geneva conventions), the drafting and implementation of rules of engagement for
soldiers, and the punishment of soldiers and others for war crimes.
The discussion of the ethics of war goes back to the Greeks and Romans, although neither civilization behaved particularly well in war.
In the Christian tradition war ethics were developed by St Augustine, and later by St Thomas Aquinas and others.
Hugo Grotius (1583-1645), a Dutch philosopher and author of De Jure Belli Ac Pacis (The Rights of War and Peace), wrote down the conditions for a just war that are
accepted today.
Cicero argued that there was no acceptable reason for war outside of just revenge or self-defence – in which he included the defence of honor.
He also argued that a war could not be just unless it was publicly declared and unless compensation for the enemy’s offence had first been demanded.
Cicero based his argument on the assumption that nature and human reason influenced a society against war, and that there was a fundamental code of behavior for
nations.
www.tecconcursos.com.br/questoes/1272942
TEXT I
ETHICS OF WAR
Human beings have been fighting each other since prehistoric times, and people have been discussing the rights and wrongs of it for almost as long.
The Ethics of War starts by assuming that war is a bad thing, and should be avoided if possible, but it recognizes that there can be situations when war may be the lesser
evil of several bad choices.
War is a bad thing because it involves deliberately killing or injuring people, and this is a fundamental wrong – an abuse of the victims' human rights.
The purpose of war ethics is to help decide what is right or wrong, both for individuals and countries, and to contribute to debates on public policy, and ultimately to
government and individual action.
War ethics also leads to the creation of formal codes of war (e.g. the Hague and Geneva conventions), the drafting and implementation of rules of engagement for
soldiers, and the punishment of soldiers and others for war crimes.
The discussion of the ethics of war goes back to the Greeks and Romans, although neither civilization behaved particularly well in war.
In the Christian tradition war ethics were developed by St Augustine, and later by St Thomas Aquinas and others.
Hugo Grotius (1583-1645), a Dutch philosopher and author of De Jure Belli Ac Pacis (The Rights of War and Peace), wrote down the conditions for a just war that are
accepted today.
Cicero argued that there was no acceptable reason for war outside of just revenge or self-defence – in which he included the defence of honor.
He also argued that a war could not be just unless it was publicly declared and unless compensation for the enemy’s offence had first been demanded.
Cicero based his argument on the assumption that nature and human reason influenced a society against war, and that there was a fundamental code of behavior for
nations.
www.tecconcursos.com.br/questoes/1272943
TEXT I
ETHICS OF WAR
Human beings have been fighting each other since prehistoric times, and people have been discussing the rights and wrongs of it for almost as long.
The Ethics of War starts by assuming that war is a bad thing, and should be avoided if possible, but it recognizes that there can be situations when war may be the lesser
evil of several bad choices.
War is a bad thing because it involves deliberately killing or injuring people, and this is a fundamental wrong – an abuse of the victims' human rights.
The purpose of war ethics is to help decide what is right or wrong, both for individuals and countries, and to contribute to debates on public policy, and ultimately to
government and individual action.
War ethics also leads to the creation of formal codes of war (e.g. the Hague and Geneva conventions), the drafting and implementation of rules of engagement for
soldiers, and the punishment of soldiers and others for war crimes.
The discussion of the ethics of war goes back to the Greeks and Romans, although neither civilization behaved particularly well in war.
In the Christian tradition war ethics were developed by St Augustine, and later by St Thomas Aquinas and others.
Hugo Grotius (1583-1645), a Dutch philosopher and author of De Jure Belli Ac Pacis (The Rights of War and Peace), wrote down the conditions for a just war that are
accepted today.
Cicero argued that there was no acceptable reason for war outside of just revenge or self-defence – in which he included the defence of honor.
He also argued that a war could not be just unless it was publicly declared and unless compensation for the enemy’s offence had first been demanded.
Cicero based his argument on the assumption that nature and human reason influenced a society against war, and that there was a fundamental code of behavior for
nations.
www.tecconcursos.com.br/questoes/1272944
TEXT I
ETHICS OF WAR
Human beings have been fighting each other since prehistoric times, and people have been discussing the rights and wrongs of it for almost as long.
The Ethics of War starts by assuming that war is a bad thing, and should be avoided if possible, but it recognizes that there can be situations when war may be the lesser
evil of several bad choices.
War is a bad thing because it involves deliberately killing or injuring people, and this is a fundamental wrong – an abuse of the victims' human rights.
The purpose of war ethics is to help decide what is right or wrong, both for individuals and countries, and to contribute to debates on public policy, and ultimately to
government and individual action.
War ethics also leads to the creation of formal codes of war (e.g. the Hague and Geneva conventions), the drafting and implementation of rules of engagement for
soldiers, and the punishment of soldiers and others for war crimes.
The discussion of the ethics of war goes back to the Greeks and Romans, although neither civilization behaved particularly well in war.
In the Christian tradition war ethics were developed by St Augustine, and later by St Thomas Aquinas and others.
Hugo Grotius (1583-1645), a Dutch philosopher and author of De Jure Belli Ac Pacis (The Rights of War and Peace), wrote down the conditions for a just war that are
accepted today.
Cicero argued that there was no acceptable reason for war outside of just revenge or self-defence – in which he included the defence of honor.
He also argued that a war could not be just unless it was publicly declared and unless compensation for the enemy’s offence had first been demanded.
Cicero based his argument on the assumption that nature and human reason influenced a society against war, and that there was a fundamental code of behavior for
nations.
a) a sense of obligation.
www.tecconcursos.com.br/questoes/1272946
TEXT I
ETHICS OF WAR
Human beings have been fighting each other since prehistoric times, and people have been discussing the rights and wrongs of it for almost as long.
The Ethics of War starts by assuming that war is a bad thing, and should be avoided if possible, but it recognizes that there can be situations when war may be the lesser
evil of several bad choices.
War is a bad thing because it involves deliberately killing or injuring people, and this is a fundamental wrong – an abuse of the victims' human rights.
The purpose of war ethics is to help decide what is right or wrong, both for individuals and countries, and to contribute to debates on public policy, and ultimately to
government and individual action.
War ethics also leads to the creation of formal codes of war (e.g. the Hague and Geneva conventions), the drafting and implementation of rules of engagement for
soldiers, and the punishment of soldiers and others for war crimes.
The discussion of the ethics of war goes back to the Greeks and Romans, although neither civilization behaved particularly well in war.
In the Christian tradition war ethics were developed by St Augustine, and later by St Thomas Aquinas and others.
Hugo Grotius (1583-1645), a Dutch philosopher and author of De Jure Belli Ac Pacis (The Rights of War and Peace), wrote down the conditions for a just war that are
accepted today.
Cicero argued that there was no acceptable reason for war outside of just revenge or self-defence – in which he included the defence of honor.
He also argued that a war could not be just unless it was publicly declared and unless compensation for the enemy’s offence had first been demanded.
Cicero based his argument on the assumption that nature and human reason influenced a society against war, and that there was a fundamental code of behavior for
nations.
Choose the alternative in which the determiner ‘neither’ is used with the same meaning as the one in italics in the text.
a) ‘My brother can’t swim. Me neither.’
d) ‘Can you come on Monday or Tuesday?’ ‘I’m afraid neither day is possible.’
www.tecconcursos.com.br/questoes/1272947
TEXT I
ETHICS OF WAR
Human beings have been fighting each other since prehistoric times, and people have been discussing the rights and wrongs of it for almost as long.
The Ethics of War starts by assuming that war is a bad thing, and should be avoided if possible, but it recognizes that there can be situations when war may be the lesser
evil of several bad choices.
War is a bad thing because it involves deliberately killing or injuring people, and this is a fundamental wrong – an abuse of the victims' human rights.
The purpose of war ethics is to help decide what is right or wrong, both for individuals and countries, and to contribute to debates on public policy, and ultimately to
government and individual action.
War ethics also leads to the creation of formal codes of war (e.g. the Hague and Geneva conventions), the drafting and implementation of rules of engagement for
soldiers, and the punishment of soldiers and others for war crimes.
The discussion of the ethics of war goes back to the Greeks and Romans, although neither civilization behaved particularly well in war.
In the Christian tradition war ethics were developed by St Augustine, and later by St Thomas Aquinas and others.
Hugo Grotius (1583-1645), a Dutch philosopher and author of De Jure Belli Ac Pacis (The Rights of War and Peace), wrote down the conditions for a just war that are
accepted today.
Cicero argued that there was no acceptable reason for war outside of just revenge or self-defence – in which he included the defence of honor.
He also argued that a war could not be just unless it was publicly declared and unless compensation for the enemy’s offence had first been demanded.
Cicero based his argument on the assumption that nature and human reason influenced a society against war, and that there was a fundamental code of behavior for
nations.
The verbal construction of the underlined sentence in the text expresses the notion of an action
www.tecconcursos.com.br/questoes/1272948
TEXT I
ETHICS OF WAR
Human beings have been fighting each other since prehistoric times, and people have been discussing the rights and wrongs of it for almost as long.
The Ethics of War starts by assuming that war is a bad thing, and should be avoided if possible, but it recognizes that there can be situations when war may be the lesser
evil of several bad choices.
War is a bad thing because it involves deliberately killing or injuring people, and this is a fundamental wrong – an abuse of the victims' human rights.
The purpose of war ethics is to help decide what is right or wrong, both for individuals and countries, and to contribute to debates on public policy, and ultimately to
government and individual action.
War ethics also leads to the creation of formal codes of war (e.g. the Hague and Geneva conventions), the drafting and implementation of rules of engagement for
soldiers, and the punishment of soldiers and others for war crimes.
The discussion of the ethics of war goes back to the Greeks and Romans, although neither civilization behaved particularly well in war.
In the Christian tradition war ethics were developed by St Augustine, and later by St Thomas Aquinas and others.
Hugo Grotius (1583-1645), a Dutch philosopher and author of De Jure Belli Ac Pacis (The Rights of War and Peace), wrote down the conditions for a just war that are
accepted today.
Cicero argued that there was no acceptable reason for war outside of just revenge or self-defence – in which he included the defence of honor.
He also argued that a war could not be just unless it was publicly declared and unless compensation for the enemy’s offence had first been demanded.
Cicero based his argument on the assumption that nature and human reason influenced a society against war, and that there was a fundamental code of behavior for
nations.
Mark the only sentence below that has the same function of the Modal verb in bold in text.
a) ‘Children under 8 are not allowed to swim here’, the sign says.
c) “May I have your attention?” The principal asked the students before the classes started.
www.tecconcursos.com.br/questoes/1272950
TEXT II
SEPTEMBER 11
On September 11, 2001, nineteen militants associated with the Islamic extremist group al-Qaeda hijacked four airplanes and carried out suicide attacks against targets in
the United States. Two of the planes were flown into the towers of the World Trade Center in New York City, a third plane hit the Pentagon just outside Washington, D.C.,
and the fourth plane crashed in a field in Pennsylvania. Often referred to as 9/11, the attacks resulted in extensive death and destruction, activating major U.S. initiatives
to combat terrorism and defining the presidency of George W. Bush. Over three thousand people were killed during the attacks in New York City and Washington, D.C.,
including more than four hundred police officers and firefighters.
At 8:45 a.m., on a clear Tuesday morning, an American Airlines Boeing 767 loaded with twenty thousand gallons of jet fuel crashed into the north tower of the World
Trade Center in New York City. The impact left a wide, burning hole near the 80th floor of the 110- story skyscraper, instantly killing hundreds of people and trapping
hundreds more in higher floors. Eighteen minutes after the first plane hit, a second Boeing 767– United Airlines Flight 175–appeared out of the sky, turned sharply toward
the World Trade Center and crashed into the south tower near the 60th floor. The collision caused a massive explosion that showered burning fragments over surrounding
buildings and the streets below. America was under attack.
The attackers were Islamic terrorists from Saudi Arabia and several other Arab nations. Reportedly financed by Saudi fugitive Osama bin Laden's al-Qaeda terrorist
organization, they _____________(1) in retaliation for America's support of Israel, its involvement in the Persian Gulf War and its continued military presence in the
Middle East. Some of the terrorists had lived in the United States for more than a year and _____________(2) flying lessons at American commercial flight schools.
As millions watched the events unfolding in New York, American Airlines Flight 77 circled over downtown Washington, D.C., and banged into the west side of the Pentagon
military headquarters at 9:45 a.m. Jet fuel from the Boeing 757 caused a devastating inferno that led to the structural collapse of a portion of the giant concrete building.
Less than fifteen minutes after the terrorists struck the nerve center of the U.S. military, the horror in New York took a catastrophic turn for the worse when the south
tower of the World Trade Center collapsed in a massive cloud of dust and smoke. At 10:30 a.m., the other Trade Center tower collapsed. Close to three thousand people
died in the World Trade Center and its vicinity, including a staggering three hundred and forty-three firefighters and paramedics, twenty-three New York City police
officers and thirty-seven Port Authority police officers who were struggling to complete an evacuation of the buildings and save the office workers trapped* on higher
floors.
Meanwhile, a fourth California-bound plane – United Flight 93 – was hijacked about forty minutes after leaving Newark International Airport in New Jersey. Because the
plane had been delayed in taking off, passengers on board learned of events in New York and Washington via cell phone calls to the ground. Knowing that the aircraft was
not returning to an airport as the hijackers claimed, a group of passengers and flight attendants planned a rebellion. One of the passengers, Thomas Burnett Jr., told his
wife over the phone that "I know we're all going to die. There are three of us who are going to do something about it. I love you, honey." Another passenger – Todd
Beamer – was heard saying "Are you guys ready? Let's roll" over an open line.
The passengers fought the four hijackers and are suspected to have attacked the cockpit with a fire extinguisher. The plane then flipped over and sped toward the
ground, crashing in a rural field in western Pennsylvania at 10:10 a.m. All forty-five people aboard were killed. Within two months, U.S. forces had effectively removed the
Taliban from operational power, but the war continued. Osama bin Laden was finally tracked down and killed by U.S. forces in Abbottabad, Pakistan.
Glossary:
*Trapped – to be in a bad situation that is difficult to escape.
Choose the alternative containing the correct verb tenses to complete gaps (1) and (2) in the text.
www.tecconcursos.com.br/questoes/1272951
TEXT II
SEPTEMBER 11
On September 11, 2001, nineteen militants associated with the Islamic extremist group al-Qaeda hijacked four airplanes and carried out suicide attacks against targets in
the United States. Two of the planes were flown into the towers of the World Trade Center in New York City, a third plane hit the Pentagon just outside Washington, D.C.,
and the fourth plane crashed in a field in Pennsylvania. Often referred to as 9/11, the attacks resulted in extensive death and destruction, activating major U.S. initiatives
to combat terrorism and defining the presidency of George W. Bush. Over three thousand people were killed during the attacks in New York City and Washington, D.C.,
including more than four hundred police officers and firefighters.
At 8:45 a.m., on a clear Tuesday morning, an American Airlines Boeing 767 loaded with twenty thousand gallons of jet fuel crashed into the north tower of the World
Trade Center in New York City. The impact left a wide, burning hole near the 80th floor of the 110-story skyscraper, instantly killing hundreds of people and trapping
hundreds more on higher floors. Eighteen minutes after the first plane hit, a second Boeing 767 – United Airlines Flight 175 – appeared out of the sky, turned sharply toward
the World Trade Center and crashed into the south tower near the 60th floor. The collision caused a massive explosion that showered burning fragments over surrounding
buildings and the streets below. America was under attack.
The attackers were Islamic terrorists from Saudi Arabia and several other Arab nations. Reportedly financed by Saudi fugitive Osama bin Laden's al-Qaeda terrorist
organization, they were acting in retaliation for America's support of Israel, its involvement in the Persian Gulf War and its continued military presence in the Middle East.
Some of the terrorists had lived in the United States for more than a year and had taken flying lessons at American commercial flight schools.
As millions watched the events unfolding in New York, American Airlines Flight 77 circled over downtown Washington, D.C., and banged into the west side of the Pentagon
military headquarters at 9:45 a.m. Jet fuel from the Boeing 757 caused a devastating inferno that led to the structural collapse of a portion of the giant concrete building.
Less than fifteen minutes after the terrorists struck the nerve center of the U.S. military, the horror in New York took a catastrophic turn for the worse when the south
tower of the World Trade Center collapsed in a massive cloud of dust and smoke. At 10:30 a.m., the other Trade Center tower collapsed. Close to three thousand people
died in the World Trade Center and its vicinity, including an impressive three hundred and forty-three firefighters and paramedics, twenty-three New York City police
officers and thirty-seven Port Authority police officers who were struggling to complete an evacuation of the buildings and save the office workers trapped* on higher
floors.
Meanwhile, a fourth California-bound plane – United Flight 93 – was hijacked about forty minutes after leaving Newark International Airport in New Jersey. Because the
plane had been delayed in taking off, passengers on board learned of events in New York and Washington via cell phone calls to the ground. Knowing that the aircraft was
not returning to an airport as the hijackers claimed, a group of passengers and flight attendants planned a rebellion. One of the passengers, Thomas Burnett Jr., told his
wife over the phone that "I know we're all going to die. There are three of us who are going to do something about it. I love you, honey." Another passenger – Todd
Beamer – was heard saying "Are you guys ready? Let's roll" over an open line.
The passengers fought the four hijackers and are suspected to have attacked the cockpit with a fire extinguisher. The plane then flipped over and sped toward the
ground, crashing in a rural field in western Pennsylvania at 10:10 a.m. All forty-five people aboard were killed. Within two months, U.S. forces had effectively removed the
Taliban from operational power, but the war continued. Osama bin Laden was finally tracked down and killed by U.S. forces in Abbottabad, Pakistan.
Glossary:
*Trapped – to be in a bad situation that is difficult to escape.
c) because the Middle East had lost previous wars against the USA.
www.tecconcursos.com.br/questoes/1272953
TEXT II
SEPTEMBER 11 (text repeated; see the full passage above)
I. Almost three thousand people were saved in the World Trade Center.
II. The hijackers of the United Flight 93 plane circled over downtown Washington, D.C.
III. A fire extinguisher is supposed to be the weapon used by the passengers to attack the hijackers.
IV. The North tower was the second giant concrete building to collapse.
a) I and III.
b) I and II.
d) I, II and III.
www.tecconcursos.com.br/questoes/1272954
TEXT II
SEPTEMBER 11 (text repeated; see the full passage above)
According to the text, “some terrorists had lived in the United States for more than a year [f]”. This means that the terrorists
www.tecconcursos.com.br/questoes/1272955
TEXT II
SEPTEMBER 11 (text repeated; see the full passage above)
www.tecconcursos.com.br/questoes/1272956
TEXT II
SEPTEMBER 11 (text repeated; see the full passage above)
a) talked about
b) heard
c) looked for
d) typed
www.tecconcursos.com.br/questoes/1272957
TEXT II
SEPTEMBER 11 (text repeated; see the full passage above)
The sentence “Thomas Burnett Jr. told his wife over the phone that ‘I know that we’re all going to die’ ” is similar in meaning to
www.tecconcursos.com.br/questoes/1272959
TEXT II
SEPTEMBER 11 (text repeated; see the full passage above)
If the plane hadn’t been delayed in taking off, the passengers ............... about the events in New York and Washington.
c) would know
d) hadn’t known
www.tecconcursos.com.br/questoes/1302797
AFA (Air Force Academy), located in Pirassununga, State of São Paulo, is responsible for the training of Pilots, Administrative and Aeronautics Infantry Officers for the
Brazilian Air Force.
The history of the Brazilian military pilot schools goes back to 1913, when the Brazilian Aviation School was founded at Campo dos Afonsos, State of Rio de Janeiro. Its
mission was to provide instruction at levels similar to those of the best European schools at the time; Blériot and Farman aircraft, made in France, were available for the
instruction of the pupils. The Great War (1914-1918), however, forced its instructors to leave, and the school was closed.
At that time, both the Brazilian Army and Navy had their own air arms, the Military Aviation and the Naval Aviation. The Navy bought Curtiss F seaplanes in May 1916 to
equip the latter, and in August of the same year, the Naval Aviation School was created.
The Military Aviation, however, only activated its Military Aviation School after the Great War, on 10 July 1919. Among the aircraft used at the school, one could find the
Sopwith 1A2, Bréguet 14A2, and Spad 7.
Until the beginning of the 1940s, both schools continued with their activities. The Brazilian Government was concerned about the air war in Europe and decided to
concentrate the military aviation activities under a single command. Thus, on 20 January 1941, the Air Ministry was created and both the Army and Navy air arms were
disbanded, their personnel and equipment forming the Brazilian Air Force. On 25 March 1941, the Aeronautics School was based at Campo dos Afonsos, and its students
have been known as Aeronautics Cadets since 1943.
As early as 1942, it became clear that the Aeronautics School would need to be transferred to another place, one offering a better climate and little
interference with the flight instruction of the future pilots.
The town of Pirassununga was chosen among others, and, in 1952, construction of the first buildings was initiated. The transfer of the School's activities to Pirassununga
occurred from 1960 to 1971. The School was redesignated as the Air Force Academy in 1969.
The motto of the Academy is the Latin expression “Macte Animo! Generose Puer, sic itur ad astra”, extracted from the poem Thebaida, by the Roman poet Statius. It is an
exhortation to the cadets, which can be translated as “Courage! This is the way, oh noble youngster, to the stars.”
The instruction of the Aeronautics Cadets, during the four-year-long course, has its activities centred on the words COURAGE - LOYALTY - HONOUR - DUTY
- MOTHERLAND. The future officers take courses in several subjects, including Calculus, Computer Science, Mechanics, Portuguese and English, given by civilian
lecturers, Air Force instructors and supervisors. The military instruction itself is given on a daily basis, and the Cadets are trained in different subjects, including
parachuting, and sea and jungle survival.
According to the chosen specialization, the Cadet will receive specific instruction:
Pilots: Instruction on precision maneuvering, aerobatics, and formation and instrument flying, with 75 flying hours on the primary/basic training aircraft T-25 Universal,
beginning in the 2nd term of the 1st year and completed in the 3rd year. Advanced training is given on the T-27 Tucano aircraft, with 125 flying hours.
Administrative: Training in the modern scientific and technological foundations of economics and financial management, plus logistics training.
Aeronautics Infantry: Instruction in defense and security techniques for military Aeronautics installations, anti-aircraft measures, command of troops and firefighting
teams, military laws and regulations, armament usage, and military service and call-up procedures.
During their leisure time, the Cadets participate in the activities of seven different clubs: Aeromodelling, Literature, Informatics, Firearms shooting, Gauchos Heritage (for
those coming from the South of Brazil), Gerais Club and Sail Flying. The clubs are directed by the Cadets themselves, under the supervision of Air Force officers.
The Academy also houses the Brazilian Air Force Air Demonstration Squadron - The Smoke Squadron.
b) The Air Force Academy trains the Army to administer the Brazilian officers.
c) The Academy instructs the Aeronautics Brazilian officers to manage our country.
d) The Brazilian Aviation School forced AFA’s instructors to abandon their military base, creating a new command.
www.tecconcursos.com.br/questoes/1302798
AFA (Air Force Academy) — text repeated; see the full passage above.
Read the statements below and mark only the correct ones, according to the text.
I. The military aviation work had to be controlled by Europe in the beginning of the 1940s because of a war.
II. Because of a war, the government resolved to unify the military aviation operation under a single command.
III. A single officer was chosen to concentrate the military aviation skills.
IV. As the Brazilian government got worried, it was decided to join the military aviation operations due to the European air war.
a) III and I.
b) II and IV.
c) I, II and III.
www.tecconcursos.com.br/questoes/1302800
AFA (Air Force Academy) — text repeated; see the full passage above.
b) metaphor that describes the similarity among pilots, aircraft and wings.
www.tecconcursos.com.br/questoes/1302801
AFA (Air Force Academy) — text repeated; see the full passage above.
Mark the alternative that has the fragment from the text INCORRECTLY changed into Active Voice.
www.tecconcursos.com.br/questoes/1302802
AFA (Air Force Academy) — text repeated; see the full passage above.
We can infer from the text that among the different specializations
a) the future Pilot has to be trained for hours before becoming skilful.
b) the pilot should follow instructions on security techniques and deal with anti-aircraft measures more than the Aeronautics Infantry.
c) the Administrative Officer might have the most advanced training on aircraft of all.
d) Aeronautics Infantry and Pilots ought to obtain more and more instructions on aerobatics.
www.tecconcursos.com.br/questoes/1302803
AFA (Air Force Academy) — text repeated; see the full passage above.
All the options below complete the boldfaced sentence. Mark the one in which the Relative Pronoun is INCORRECTLY used.
b) Rio de Janeiro was the place where this school was located.
c) there were two French aircrafts who were available to the instructions of the students.
www.tecconcursos.com.br/questoes/1302804
Read the statements about the informative text and mark the correct option.
I. In the beginning of the last century, Brazilian cadets were sent to the best European schools that provided them instruction.
II. In France, the youngsters had Blériot and Farman aircraft instruction.
III. Brazilian Aviation School had to be closed in 1913.
IV. The Brazilian Aviation School and the Naval Aviation School were created in the same year.
www.tecconcursos.com.br/questoes/1302809
a) contrast - result
b) addition - conclusion
c) contrast - addition
d) conclusion - result
www.tecconcursos.com.br/questoes/1302810
a) the Brazilian Air Force replaced the Army and Navy air arms.
b) Military and Naval aviation schools were created at Campo dos Afonsos.
c) students from both Military and Naval aviation schools started to be called Aeronautics Cadets.
d) the Air Ministry created the Army and Navy air arms.
www.tecconcursos.com.br/questoes/1302811
The sentence “The Military Aviation [...] activated its Military Aviation School after the Great War [...]” can be rewritten, with the same meaning, as:
a) during the Great War the Military Aviation activated its Military Aviation School.
b) by the time the Military Aviation activated its Military Aviation School, the Great War had already finished.
c) the Great War finished when the Military Aviation activated its Military Aviation School.
d) the Military Aviation activated its Military Aviation School through the Great War.
www.tecconcursos.com.br/questoes/1302815
Speaking two languages rather than just one has obvious practical benefits in an increasingly globalized world. But in recent years, scientists have begun to show that the
advantages of bilingualism are even more fundamental than being able to converse with a wider range of people. Being bilingual, it turns out, makes you smarter. It can
have a profound effect on your brain, improving cognitive skills not related to language and even protecting from dementia in old age.
This view of bilingualism is remarkably different from the understanding of bilingualism through much of the 20th century. Researchers, educators and policy makers long
considered a second language to be an interference, cognitively speaking, that delayed a child’s academic and intellectual development. They were not wrong about the
interference: there is ample evidence that in a bilingual’s brain both language systems are active even when he is using only one language, thus creating situations in
which one system obstructs the other. But this interference, researchers are finding out, isn’t so much a handicap as a blessing in disguise. It forces the brain to resolve
internal conflict, giving the mind a workout that strengthens its cognitive muscles.
Bilinguals, for instance, seem to be more adept than monolinguals at solving certain kinds of mental puzzles. In a 2004 study by the psychologists Ellen Bialystok and
Michelle Martin-Rhee, bilingual and monolingual preschoolers were asked to sort blue circles and red squares presented on a computer screen into two digital bins — one
marked with a blue square and the other marked with a red circle. In the first task, the children had to sort the shapes by color, placing blue circles in the bin marked with
the blue square and red squares in the bin marked with the red circle. Both groups did this with comparable ease. Next, the children were asked to sort by shape, which
was more challenging because it required placing the images in a bin marked with a conflicting color. The bilinguals were quicker at performing this task.
The collective evidence from a number of such studies suggests that the bilingual experience improves the brain’s so-called executive function — a command system that
directs the attention processes that we use for planning, solving problems and performing various other mentally demanding tasks. These processes include ignoring
distractions to stay focused, switching attention willfully from one thing to another and holding information in mind — like remembering a sequence of directions while
driving.
Why does the fight between two simultaneously active language systems improve these aspects of cognition? Until recently, researchers thought the bilingual advantage
was centered primarily in an ability for inhibition that was improved by the exercise of suppressing one language system: this suppression, it was thought, would help
train the bilingual mind to ignore distractions in other contexts. But that explanation increasingly appears to be inadequate, since studies have shown that bilinguals
perform better than monolinguals even at tasks that do not require inhibition, like threading a line through an ascending series of numbers scattered randomly on a page.
The bilingual experience appears to influence the brain from infancy to old age (and there is reason to believe that it may also apply to those who learn a second
language later in life).
In a 2009 study led by Agnes Kovacs of the International School for Advanced Studies in Trieste, Italy, 7-month-old babies exposed to two languages from birth were
compared with peers raised with one language. In an initial set of tests, the infants were presented with an audio stimulus and then shown a puppet on one side of a
screen. Both infant groups learned to look at that side of the screen in anticipation of the puppet. But in a later set of tests, when the puppet began appearing on the
opposite side of the screen, the babies exposed to a bilingual environment quickly learned to switch their anticipatory gaze in the new direction while the other babies did
not.
Bilingualism’s effects also extend into the twilight years. In a recent study of 44 elderly Spanish-English bilinguals, scientists led by the neuropsychologist Tamar Gollan of
the University of California, San Diego, found that individuals with a higher degree of bilingualism — measured through a comparative evaluation of proficiency in each
language — were more resistant than others to the beginning of dementia and other symptoms of Alzheimer’s disease: the higher the degree of bilingualism, the later the
age of occurrence.
Nobody ever doubted the power of language. But who would have imagined that the words we hear and the sentences we speak might be leaving such a deep imprint?
Adapted from http://www.nytimes.com/2012/03/18/opinion/sunday/the-benefits-of-bilingualism.html
The last two sentences of the second paragraph mean that the interference of bilingualism
a) was considered positive in the past, but nowadays this view has changed.
b) has always been a problem, since the brain has to solve an internal conflict.
d) has proved to increase the disabilities of the brain and reduce the blessings it can have.
www.tecconcursos.com.br/questoes/1302819
Considering the context, mark the alternative that contains the correct synonym or explanation for the words from the text.
a) Remarkably – ordinarily, usually.
www.tecconcursos.com.br/questoes/1302821
Mark the INCORRECT option. According to the text, recent research proves that bilingualism
a) causes general cognitive development.
c) prevents people from suffering from problems related to memory and other mental disorders or delay these problems.
www.tecconcursos.com.br/questoes/1302824
Mark the option that correctly substitutes the expression rather than .
a) Instead of.
b) As well as.
c) Aside from.
d) In addition to.
www.tecconcursos.com.br/questoes/1302826
www.tecconcursos.com.br/questoes/1302828
www.tecconcursos.com.br/questoes/1302834
The relative pronoun THAT can be omitted in all the sentences below, EXCEPT
a) The collective evidence from a number of such studies suggests that the bilingual experience improves the brain’s so-called executive function.
b) [...] the bilingual advantage was centered primarily in ability for inhibition that was improved by the exercise of suppressing one language system.
c) [...] there is reason to believe that it may also apply to those who learn a second language later in life. [...]
d) But who would have imagined that the words we hear and the sentences we speak might be leaving such a deep imprint?
www.tecconcursos.com.br/questoes/1302840
Speaking two languages rather than just one has obvious practical benefits in an increasingly globalized world. But in recent years, scientists have begun to show that the
advantages of bilingualism are even more fundamental than being able to converse with a wider range of people. Being bilingual, it turns out, makes you smarter. It can
have a profound effect on your brain, improving cognitive skills not related to language and even protecting from dementia in old age.
This view of bilingualism is remarkably different from the understanding of bilingualism through much of the 20th century. Researchers, educators and policy makers long
considered a second language to be an interference, cognitively speaking, that delayed a child’s academic and intellectual development. They were not wrong about the
interference: there is ample evidence that in a bilingual’s brain both language systems are active even when he is using only one language, thus creating situations in
which one system obstructs the other. But this interference, researchers are finding out, isn’t so much a handicap as a blessing in disguise. It forces the brain to resolve
internal conflict, giving the mind a workout that strengthens its cognitive muscles.
Bilinguals, for instance, seem to be more adept than monolinguals at solving certain kinds of mental puzzles. In a 2004 study by the psychologists Ellen Bialystok and
Michelle Martin-Rhee, bilingual and monolingual preschoolers were asked to sort blue circles and red squares presented on a computer screen into two digital bins — one
marked with a blue square and the other marked with a red circle. In the first task, the children had to sort the shapes by color, placing blue circles in the bin marked with
the blue square and red squares in the bin marked with the red circle. Both groups did this with comparable ease. Next, the children were asked to sort by shape, which
was more challenging because it required placing the images in a bin marked with a conflicting color. The bilinguals were quicker at performing this task.
The collective evidence from a number of such studies suggests that the bilingual experience improves the brain’s so-called executive function — a command system that
directs the attention processes that we use for planning, solving problems and performing various other mentally demanding tasks. These processes include ignoring
distractions to stay focused, switching attention willfully from one thing to another and holding information in mind — like remembering a sequence of directions while
driving.
Why does the fight between two simultaneously active language systems improve these aspects of cognition? Until recently, researchers thought the bilingual advantage
was centered primarily in an ability for inhibition that was improved by the exercise of suppressing one language system: this suppression, it was thought, would help
train the bilingual mind to ignore distractions in other contexts. But that explanation increasingly appears to be inadequate, since studies have shown that bilinguals
perform better than monolinguals even at tasks that do not require inhibition, like threading a line through an ascending series of numbers scattered randomly on a page.
The bilingual experience appears to influence the brain from infancy to old age (and there is reason to believe that it may also apply to those who learn a second
language later in life).
In a 2009 study led by Agnes Kovacs of the International School for Advanced Studies in Trieste, Italy, 7-month-old babies exposed to two languages from birth were
compared with peers raised with one language. In an initial set of tests, the infants were presented with an audio stimulus and then shown a puppet on one side of a
screen. Both infant groups learned to look at that side of the screen in anticipation of the puppet. But in a later set of tests, when the puppet began appearing on the
opposite side of the screen, the babies exposed to a bilingual environment quickly learned to switch their anticipatory gaze in the new direction while the other babies did
not.
Bilingualism’s effects also extend into the twilight years. In a recent study of 44 elderly Spanish-English bilinguals, scientists led by the neuropsychologist Tamar Gollan of
the University of California, San Diego, found that individuals with a higher degree of bilingualism — measured through a comparative evaluation of proficiency in each
language — were more resistant than others to the beginning of dementia and other symptoms of Alzheimer’s disease: the higher the degree of bilingualism, the later the
age of occurrence.
Nobody ever doubted the power of language. But who would have imagined that the words we hear and the sentences we speak might be leaving such a deep imprint?
Adapted from
http://www.nytimes.com/2012/03/18/opinion/sunday/the-benefits-of-bilingualism.html
One of the extracted fragments below is followed by its correct tag question. Mark the item.
a) The bilingual experience appears to influence the brain from infancy to old age, don’t they?
b) Bilingualism’s effects also extend into the twilight years, has it?
www.tecconcursos.com.br/questoes/1302849
Speaking two languages rather than just one has obvious practical benefits in an increasingly globalized world. But in recent years, scientists have begun to show that the
advantages of bilingualism are even more fundamental than being able to converse with a wider range of people. Being bilingual, it turns out, makes you smarter. It can
have a profound effect on your brain, improving cognitive skills not related to language and even protecting from dementia in old age.
This view of bilingualism is remarkably different from the understanding of bilingualism through much of the 20th century. Researchers, educators and policy makers long
considered a second language to be an interference, cognitively speaking, that delayed a child’s academic and intellectual development. They were not wrong about the
interference: there is ample evidence that in a bilingual’s brain both language systems are active even when he is using only one language, thus creating situations in
which one system obstructs the other. But this interference, researchers are finding out, isn’t so much a handicap as a blessing in disguise. It forces the brain to resolve
internal conflict, giving the mind a workout that strengthens its cognitive muscles.
Bilinguals, for instance, seem to be more adept than monolinguals at solving certain kinds of mental puzzles. In a 2004 study by the psychologists Ellen Bialystok and
Michelle Martin-Rhee, bilingual and monolingual preschoolers were asked to sort blue circles and red squares presented on a computer screen into two digital bins — one
marked with a blue square and the other marked with a red circle. In the first task, the children had to sort the shapes by color, placing blue circles in the bin marked with
the blue square and red squares in the bin marked with the red circle. Both groups did this with comparable ease. Next, the children were asked to sort by shape, which
was more challenging because it required placing the images in a bin marked with a conflicting color. The bilinguals were quicker at performing this task.
The collective evidence from a number of such studies suggests that the bilingual experience improves the brain’s so-called executive function — a command system that
directs the attention processes that we use for planning, solving problems and performing various other mentally demanding tasks. These processes include ignoring
distractions to stay focused, switching attention willfully from one thing to another and holding information in mind — like remembering a sequence of directions while
driving.
Why does the fight between two simultaneously active language systems improve these aspects of cognition? Until recently, researchers thought the bilingual advantage
was centered primarily in an ability for inhibition that was improved by the exercise of suppressing one language system: this suppression, it was thought, would help
train the bilingual mind to ignore distractions in other contexts. But that explanation increasingly appears to be inadequate, since studies have shown that bilinguals
perform better than monolinguals even at tasks that do not require inhibition, like threading a line through an ascending series of numbers scattered randomly on a page.
The bilingual experience appears to influence the brain from infancy to old age (and there is reason to believe that it may also apply to those who learn a second
language later in life).
In a 2009 study led by Agnes Kovacs of the International School for Advanced Studies in Trieste, Italy, 7-month-old babies exposed to two languages from birth were
compared with peers raised with one language. In an initial set of tests, the infants were presented with an audio stimulus and then shown a puppet on one side of a
screen. Both infant groups learned to look at that side of the screen in anticipation of the puppet. But in a later set of tests, when the puppet began appearing on the
opposite side of the screen, the babies exposed to a bilingual environment quickly learned to switch their anticipatory gaze in the new direction while the other babies did
not.
Bilingualism’s effects also extend into the twilight years. In a recent study of 44 elderly Spanish-English bilinguals, scientists led by the neuropsychologist Tamar Gollan of
the University of California, San Diego, found that individuals with a higher degree of bilingualism — measured through a comparative evaluation of proficiency in each
language — were more resistant than others to the beginning of dementia and other symptoms of Alzheimer’s disease: the higher the degree of bilingualism, the later the
age of occurrence.
Nobody ever doubted the power of language. But who would have imagined that the words we hear and the sentences we speak might be leaving such a deep imprint?
Adapted from
http://www.nytimes.com/2012/03/18/opinion/sunday/the-benefits-of-bilingualism.html
www.tecconcursos.com.br/questoes/1302850
Speaking two languages rather than just one has obvious practical benefits in an increasingly globalized world. But in recent years, scientists have begun to show that the
advantages of bilingualism are even more fundamental than being able to converse with a wider range of people. Being bilingual, it turns out, makes you smarter. It can
have a profound effect on your brain, improving cognitive skills not related to language and even protecting from dementia in old age.
This view of bilingualism is remarkably different from the understanding of bilingualism through much of the 20th century. Researchers, educators and policy makers long
considered a second language to be an interference, cognitively speaking, that delayed a child’s academic and intellectual development. They were not wrong about the
interference: there is ample evidence that in a bilingual’s brain both language systems are active even when he is using only one language, thus creating situations in
which one system obstructs the other. But this interference, researchers are finding out, isn’t so much a handicap as a blessing in disguise. It forces the brain to resolve
internal conflict, giving the mind a workout that strengthens its cognitive muscles.
Bilinguals, for instance, seem to be more adept than monolinguals at solving certain kinds of mental puzzles. In a 2004 study by the psychologists Ellen Bialystok and
Michelle Martin-Rhee, bilingual and monolingual preschoolers were asked to sort blue circles and red squares presented on a computer screen into two digital bins — one
marked with a blue square and the other marked with a red circle. In the first task, the children had to sort the shapes by color, placing blue circles in the bin marked with
the blue square and red squares in the bin marked with the red circle. Both groups did this with comparable ease. Next, the children were asked to sort by shape, which
was more challenging because it required placing the images in a bin marked with a conflicting color. The bilinguals were quicker at performing this task.
The collective evidence from a number of such studies suggests that the bilingual experience improves the brain’s so-called executive function — a command system that
directs the attention processes that we use for planning, solving problems and performing various other mentally demanding tasks. These processes include ignoring
distractions to stay focused, switching attention willfully from one thing to another and holding information in mind — like remembering a sequence of directions while
driving.
Why does the fight between two simultaneously active language systems improve these aspects of cognition? Until recently, researchers thought the bilingual advantage
was centered primarily in an ability for inhibition that was improved by the exercise of suppressing one language system: this suppression, it was thought, would help
train the bilingual mind to ignore distractions in other contexts. But that explanation increasingly appears to be inadequate, since studies have shown that bilinguals
perform better than monolinguals even at tasks that do not require inhibition, like threading a line through an ascending series of numbers scattered randomly on a page.
The bilingual experience appears to influence the brain from infancy to old age (and there is reason to believe that it may also apply to those who learn a second
language later in life).
In a 2009 study led by Agnes Kovacs of the International School for Advanced Studies in Trieste, Italy, 7-month-old babies exposed to two languages from birth were
compared with peers raised with one language. In an initial set of tests, the infants were presented with an audio stimulus and then shown a puppet on one side of a
screen. Both infant groups learned to look at that side of the screen in anticipation of the puppet. But in a later set of tests, when the puppet began appearing on the
opposite side of the screen, the babies exposed to a bilingual environment quickly learned to switch their anticipatory gaze in the new direction while the other babies did
not.
Bilingualism’s effects also extend into the twilight years. In a recent study of 44 elderly Spanish-English bilinguals, scientists led by the neuropsychologist Tamar Gollan of
the University of California, San Diego, found that individuals with a higher degree of bilingualism — measured through a comparative evaluation of proficiency in each
language — were more resistant than others to the beginning of dementia and other symptoms of Alzheimer’s disease: the higher the degree of bilingualism, the later the
age of occurrence.
Nobody ever doubted the power of language. But who would have imagined that the words we hear and the sentences we speak might be leaving such a deep imprint?
Adapted from
http://www.nytimes.com/2012/03/18/opinion/sunday/the-benefits-of-bilingualism.html
In the question “Why does the fight between two simultaneously active language systems improve these aspects of cognition?”, the author asked
a) if the fight between two simultaneously active language systems had improved these aspects of cognition.
b) why does the fight between two simultaneously active language systems improved those aspects of cognition?”
c) why the fight between two simultaneously active language systems improved those aspects of cognition.
d) if the fight between two simultaneously active language systems improve these aspects of cognition?”
www.tecconcursos.com.br/questoes/1270581
Twilight
Twilight is a 2008 American romantic vampire film based ___ Stephenie Meyer’s popular novel of the same name. It is the first film in The Twilight Saga film series. This
film focuses on the development of the relationship between Bella Swan and Edward Cullen (a vampire), and the subsequent efforts of Cullen and his family to keep Swan
safe ___ a coven of evil vampires.
The project was in development for approximately 3 years ___ Paramount Pictures, during which time a screen adaptation that differed significantly from the novel was
written. Principal photography took 44 days and the film was primarily shot in Oregon.
Twilight was theatrically released ___ November 21 2010, grossing over US$392 million worldwide and became the most purchased DVD of the year. The soundtrack
was released in the same year. Following the success of the film, New Moon and Eclipse, the next two novels in the series, were produced as films the following year.
b) about – to – over – at
d) on – from – at – on
www.tecconcursos.com.br/questoes/1270582
Twilight
Twilight is a 2008 American romantic vampire film based on Stephenie Meyer’s popular novel of the same name. It is the first film in The Twilight Saga film series. This
film focuses on the development of the relationship between Bella Swan and Edward Cullen (a vampire), and the subsequent efforts of Cullen and his family to keep Swan
safe from a coven of evil vampires.
The project was in development for approximately 3 years at Paramount Pictures, during which time a screen adaptation that differed significantly from the novel was
written. Principal photography took 44 days and the film was primarily shot in Oregon.
Twilight was theatrically released on November 21 2010, grossing over US$392 million worldwide and became the most purchased DVD of the year. The soundtrack was
released in the same year. Following the success of the film, New Moon and Eclipse, the next two novels in the series, were produced as films the following year.
c) New Moon and Eclipse also became the most purchased DVDs in 2010.
d) the principal character, Cullen, tried to maintain the girl away from others of his species.
www.tecconcursos.com.br/questoes/1270583
Twilight
Twilight is a 2008 American romantic vampire film based on Stephenie Meyer’s popular novel of the same name. It is the first film in The Twilight Saga film series. This
film focuses on the development of the relationship between Bella Swan and Edward Cullen (a vampire), and the subsequent efforts of Cullen and his family to keep Swan
safe from a coven of evil vampires.
The project was in development for approximately 3 years at Paramount Pictures, during which time a screen adaptation that differed significantly from the novel was
written. Principal photography took 44 days and the film was primarily shot in Oregon.
Twilight was theatrically released on November 21 2010, grossing over US$392 million worldwide and became the most purchased DVD of the year. The soundtrack was
released in the same year. Following the success of the film, New Moon and Eclipse, the next two novels in the series, were produced as films the following year.
Considering the boldfaced adverb in the text, mark the sentence in which the underlined word has the function of an adverb.
a) Laura seems to be such a lovely person!
www.tecconcursos.com.br/questoes/1270591
Soundtrack of Twilight
After my dreaming
I woke with this fear
What am I leaving
When I’m done here
[...]
(Chorus)
When my time comes
Forget the wrong that I’ve done
Help me leave behind some
Reasons to be missed
[...]
Don’t be afraid
I’ve taken my beating
I’ve shared what I made
[...]
Pretending
Someone else can come and save me from myself
I can’t be who you are
Read the chorus of the song and choose the correct alternative.
www.tecconcursos.com.br/questoes/1270593
Soundtrack of Twilight
After my dreaming
I woke with this fear
What am I leaving
When I’m done here
[...]
(Chorus)
When my time comes
Forget the wrong that I’ve done
Help me leave behind some
Reasons to be missed
[...]
Don’t be afraid
I’ve taken my beating
I’ve shared what I made
[...]
Pretending
Someone else can come and save me from myself
I can’t be who you are
The line “I’ve shared what I made” is the answer to one of the questions below. Mark it.
www.tecconcursos.com.br/questoes/1270608
Soundtrack of Twilight
After my dreaming
I woke with this fear
What am I leaving
When I’m done here
[...]
(Chorus)
When my time comes
Forget the wrong that I’ve done
Help me leave behind some
Reasons to be missed
[...]
Don’t be afraid
I’ve taken my beating
I’ve shared what I made
[...]
Pretending
Someone else can come and save me from myself
I can’t be who you are
Observe the reflexive pronoun in italics (myself) and then read the sentences below.
Considering the letters A (reflexive), B (emphatic) and C (idiomatic), match the sentences to the letters and choose the correct alternative.
www.tecconcursos.com.br/questoes/1270630
Storyline
In this fiction movie, 84 years later, a 100-year-old woman named Rose DeWitt Bukater tells the story to her granddaughter Lizzy Calvert and others about her life set on
April 10th 1912, on a ship called Titanic when young Rose boards the departing ship with the upper-class passengers, her mother Ruth DeWitt Bukater, and her fiancé.
Meanwhile, a drifter and artist named Jack Dawson and his best friend Fabrizio De Rossi win third-class tickets to the ship in a game. She explains the whole story from
departure until the death of Titanic on its first and last voyage April 15th, 1912 at 2:20 in the morning.
"My Heart Will Go On" is the love theme of the 1997 blockbuster film Titanic. It was recorded by Celine Dion. Originally released in 1997, it went to number 1 all over the
world. It became Dion's biggest hit, and one of the best selling of all time, and was the world's best-selling single of 1998.
Soundtrack of Titanic
http://www.stlyrics.com/lyrics/titanic/myheartwillgoon
After reading both the Titanic storyline and the soundtrack, we can conclude that
a) Rose and Jack promised each other to be together.
www.tecconcursos.com.br/questoes/1270632
Storyline
In this fiction movie, 84 years later, a 100-year-old woman named Rose DeWitt Bukater tells the story to her granddaughter Lizzy Calvert and others about her life set on
April 10th 1912, on a ship called Titanic when young Rose boards the departing ship with the upper-class passengers, her mother Ruth DeWitt Bukater, and her fiancé.
Meanwhile, a drifter and artist named Jack Dawson and his best friend Fabrizio De Rossi win third-class tickets to the ship in a game. She explains the whole story from
departure until the death of Titanic on its first and last voyage April 15th, 1912 at 2:20 in the morning.
"My Heart Will Go On" is the love theme of the 1997 blockbuster film Titanic. It was recorded by Celine Dion. Originally released in 1997, it went to number 1 all over the
world. It became Dion's biggest hit, and one of the best selling of all time, and was the world's best-selling single of 1998.
Soundtrack of Titanic
http://www.stlyrics.com/lyrics/titanic/myheartwillgoon
a) narrator.
www.tecconcursos.com.br/questoes/1270637
Storyline
In this fiction movie, 84 years later, a 100-year-old woman named Rose DeWitt Bukater tells the story to her granddaughter Lizzy Calvert and others about her life set on
April 10th 1912, on a ship called Titanic when young Rose boards the departing ship with the upper-class passengers, her mother Ruth DeWitt Bukater, and her fiancé.
Meanwhile, a drifter and artist named Jack Dawson and his best friend Fabrizio De Rossi win third-class tickets to the ship in a game. She explains the whole story from
departure until the death of Titanic on its first and last voyage April 15th, 1912 at 2:20 in the morning.
"My Heart Will Go On" is the love theme of the 1997 blockbuster film Titanic. It was recorded by Celine Dion. Originally released in 1997, it went to number 1 all over the
world. It became Dion's biggest hit, and one of the best selling of all time, and was the world's best-selling single of 1998.
Soundtrack of Titanic
http://www.stlyrics.com/lyrics/titanic/myheartwillgoon
www.tecconcursos.com.br/questoes/1270643
www.cartoonstock.com
www.tecconcursos.com.br/questoes/1270655
A stunt double stands in for the actor when the action or fight scene gets dangerous or goes beyond the capabilities of the actor. To become a stunt double, you must be
in excellent physical condition and have special skills.
Instructions
1. Exercise regularly if you want to become a stunt double. Eat nutritiously for optimal health and strength.
2. Take lots of lessons because the more skills you have, the better. Gymnastics is extremely important in becoming a stunt double. Get good at trampoline,
skateboarding, swimming and high board diving. Take scuba diving lessons. Practice rock climbing and horseback riding. Learn to water ski and snow ski.
3. Enroll in martial arts classes, especially judo. Judo is excellent for learning how to break falls.
4. Get training in CPR(1) and First Aid. This training looks good on a résumé, especially for stunt double careers. Injuries happen.
5. Have valid driver's licenses for both car and motorcycle. Take advanced driving classes so you'll be qualified for difficult driving scenes.
6. Move to Hollywood and plan to work your way up from the bottom. You must get into the Screen Actors Guild(2) and have a union card(3).
c) recommends ways to deal with wounds and provides tips to be healthy to be a double.
www.tecconcursos.com.br/questoes/1270660
A stunt double stands in for the actor when the action or fight scene gets dangerous or goes beyond the capabilities of the actor. To become a stunt double, you must be
in excellent physical condition and have special skills.
Instructions
1. Exercise regularly if you want to become a stunt double. Eat nutritiously for optimal health and strength.
2. Take lots of lessons because the more skills you have, the better. Gymnastics is extremely important in becoming a stunt double. Get good at trampoline,
skateboarding, swimming and high board diving. Take scuba diving lessons. Practice rock climbing and horseback riding. Learn to water ski and snow ski.
3. Enroll in martial arts classes, especially judo. Judo is excellent for learning how to break falls.
4. Get training in CPR(1) and First Aid. This training looks good on a résumé, especially for stunt double careers. Injuries happen.
5. Have valid driver's licenses for both car and motorcycle. Take advanced driving classes so you'll be qualified for difficult driving scenes.
6. Move to Hollywood and plan to work your way up from the bottom. You must get into the Screen Actors Guild(2) and have a union card(3).
After reading the first item of the instructions, mark the option that completes the gap in the converted sentence below.
“If you want to become a stunt double you ________ exercise regularly.”
a) had to
b) might
c) could
d) must
www.tecconcursos.com.br/questoes/1270661
A stunt double stands in for the actor when the action or fight scene gets dangerous or goes beyond the capabilities of the actor. To become a stunt double, you must be
in excellent physical condition and have special skills.
Instructions
1. Exercise regularly if you want to become a stunt double. Eat nutritiously for optimal health and strength.
2. Take lots of lessons because the more skills you have, the better. Gymnastics is extremely important in becoming a stunt double. Get good at trampoline,
skateboarding, swimming and high board diving. Take scuba diving lessons. Practice rock climbing and horseback riding. Learn to water ski and snow ski.
3. Enroll in martial arts classes, especially judo. Judo is excellent for learning how to break falls.
4. Get training in CPR(1) and First Aid. This training looks good on a résumé, especially for stunt double careers. Injuries happen.
5. Have valid driver's licenses for both car and motorcycle. Take advanced driving classes so you'll be qualified for difficult driving scenes.
6. Move to Hollywood and plan to work your way up from the bottom. You must get into the Screen Actors Guild(2) and have a union card(3).
One of the instructions below IS NOT stated in the text. Choose it.
www.tecconcursos.com.br/questoes/1270665
A stunt double stands in for the actor when the action or fight scene gets dangerous or goes beyond the capabilities of the actor. To become a stunt double, you must be
in excellent physical condition and have special skills.
Instructions
1. Exercise regularly if you want to become a stunt double. Eat nutritiously for optimal health and strength.
2. Take lots of lessons because the more skills you have, the better. Gymnastics is extremely important in becoming a stunt double. Get good at trampoline,
skateboarding, swimming and high board diving. Take scuba diving lessons. Practice rock climbing and horseback riding. Learn to water ski and snow ski.
3. Enroll in martial arts classes, especially judo. Judo is excellent for learning how to break falls.
4. Get training in CPR(1) and First Aid. This training looks good on a résumé, especially for stunt double careers. Injuries happen.
5. Have valid driver's licenses for both car and motorcycle. Take advanced driving classes so you'll be qualified for difficult driving scenes.
6. Move to Hollywood and plan to work your way up from the bottom. You must get into the Screen Actors Guild(2) and have a union card(3).
Look at the bold comparative form (item 2). Choose the option that contains a similar construction.
a) The earlier we get there, the more likely we are to get good seats.
c) The smoothest Channel crossing you’ll ever have! Why not fly to France with British Airways? It’ll be the best decision you’ve ever made.
www.tecconcursos.com.br/questoes/1270667
A stunt double stands in for the actor when the action or fight scene gets dangerous or goes beyond the capabilities of the actor. To become a stunt double, you must be
in excellent physical condition and have special skills.
Instructions
1. Exercise regularly if you want to become a stunt double. Eat nutritiously for optimal health and strength.
2. Take lots of lessons because the more skills you have, the better. Gymnastics is extremely important in becoming a stunt double. Get good at trampoline,
skateboarding, swimming and high board diving. Take scuba diving lessons. Practice rock climbing and horseback riding. Learn to water ski and snow ski.
3. Enroll in martial arts classes, especially judo. Judo is excellent for learning how to break falls.
4. Get training in CPR(1) and First Aid. This training looks good on a résumé, especially for stunt double careers. Injuries happen.
5. Have valid driver's licenses for both car and motorcycle. Take advanced driving classes so you'll be qualified for difficult driving scenes.
6. Move to Hollywood and plan to work your way up from the bottom. You must get into the Screen Actors Guild(2) and have a union card(3).
www.tecconcursos.com.br/questoes/1270671
www.tecconcursos.com.br/questoes/1270674
b) is grammatically correct.
www.tecconcursos.com.br/questoes/1270677
In the context of the song, the word “when” can be replaced by
a) while.
b) even though.
c) considering that.
d) by the time.
www.tecconcursos.com.br/questoes/1270684
Murderesses Velma Kelly (a woman who killed her husband and sister after finding them in bed together) and Roxie Hart (who killed her boyfriend when she discovered
he wasn't going to make her a star) find themselves on death row together and fight for the fame that will keep them from the gallows, in 1920s musical Chicago.
Mark the most appropriate option. According to the plot summary, the musical Chicago shows
www.tecconcursos.com.br/questoes/1270688
Murderesses Velma Kelly (a woman who killed her husband and sister after finding them in bed together) and Roxie Hart (who killed her boyfriend when she discovered
he wasn't going to make her a star) find themselves on death row together and fight for the fame that will keep them from the gallows, in 1920s musical Chicago.
The expression “[...] the fame that will keep them from the gallows [...]” means the fame
b) will go on.
www.tecconcursos.com.br/questoes/1306177
When football ___ professional in South Africa in 1959, 12 clubs broke from the amateur ranks. However, in the strict days of Apartheid, these pioneers ___
whites-only organizations and ___ today, all but a few, defunct. One of the survivors is Arcadia from Tshwane/ Pretoria, an outfit that today competes in the
amateur ranks and concentrates on junior football.
http://www.fifa.com/worldcup
Mark the alternative which correctly completes the gaps in the text.
a) had gone – have been – were
www.tecconcursos.com.br/questoes/1306181
When football went professional in South Africa in 1959, 12 clubs broke from the amateur ranks. However, in the strict days of Apartheid, these pioneers were whites-only
organizations and are today, all but a few, defunct. One of the survivors is Arcadia from Tshwane/ Pretoria, an outfit that today competes in the amateur ranks and
concentrates on junior football.
http://www.fifa.com/worldcup
a) days of Apartheid were extinguished as well as the prejudice against black football players.
c) there are no more organizations (professional or amateur) like the ones from the past.
d) in early 50’s in South Africa there weren’t amateur football clubs anymore.
www.tecconcursos.com.br/questoes/1306186
c) wants to have control over the Roman, Genghis Khan’s, and British Empires.
www.tecconcursos.com.br/questoes/1306196
Many South Africans remain poor and unemployment is high − a factor blamed for a wave of violent attacks against migrant workers from other African countries in 2008
and protests by township residents over poor living conditions during the summer of 2009.
Land redistribution is a crucial problem that continues existing. Most farmland is still white-owned. ___ land acquisition on a "willing buyer, willing seller" basis,
officials have signaled that large-scale expropriations are on the cards. The government aims to transfer 30% of farmland to black South Africans by 2014.
http://news.bbc.co.uk/2/hi/africa/country_profiles/1071886.stm
Mark the alternative that completes the gap with the correct verb form.
a) Have
b) Has
c) Had
d) Having
Gabarito
1) B 2) A 3) A 4) B 5) B 6) D 7) D
8) D 9) C 10) B 11) C 12) A 13) C 14) A
15) B 16) D 17) D 18) D 19) C 20) C 21) B
22) D 23) D 24) C 25) A 26) C 27) B 28) C
29) A 30) B 31) D 32) B 33) C 34) C 35) A
36) C 37) A 38) D 39) D 40) D 41) A 42) B
43) A 44) D 45) D 46) B 47) B 48) A 49) A
50) A 51) D 52) C 53) B 54) A 55) B 56) C
57) A 58) B 59) A 60) A 61) B 62) C 63) B
64) B 65) B 66) C 67) B 68) A 69) D 70) B
71) A 72) C 73) B 74) C 75) B 76) C 77) C
78) D 79) D 80) B 81) C 82) C 83) A 84) D
85) C 86) C 87) B 88) D 89) A 90) B 91) C
92) D 93) A 94) B 95) B 96) A 97) A 98) D
99) C 100) B 101) B 102) D 103) D 104) A 105) B
106) C 107) A 108) D 109) D 110) D 111) C 112) B
113) A 114) D 115) D 116) C 117) B 118) A 119) C
120) A 121) D 122) C 123) B 124) D 125) B 126) A
127) C 128) D 129) B 130) A 131) A 132) B 133) D
134) B 135) D 136) C 137) D 138) B 139) C 140) D
141) B 142) A 143) D 144) C 145) D 146) D 147) C
148) D 149) D 150) B 151) C 152) A 153) C 154) B
155) B 156) A 157) A 158) B 159) B 160) D 161) A
162) C 163) D 164) A 165) A 166) B 167) C 168) C
169) B 170) A 171) C 172) B 173) B 174) D 175) C
176) C 177) D 178) C 179) B 180) A 181) D 182) B
183) C 184) A 185) C 186) B 187) C 188) D 189) C
190) A 191) B 192) A 193) B 194) D 195) A 196) D
197) B 198) B 199) D 200) D