
KEY DIFFERENCE BETWEEN TEST AND EXPERIMENT
• A test, or psychological test, is used by a psychologist or a counselor to comprehend the psychological makeup of an individual. An experiment refers to an investigation in which the validity of a hypothesis is tested in a scientific manner.
• By conducting a test, the psychologist can comprehend and calculate certain attributes of the individual. In an experiment, the psychologist usually manipulates the independent variable, in relation to which the dependent variable also reacts; through this, cause and effect are studied.
• Tests have no hypotheses; they deal with causes (certain attributes or traits) and possible explanations. Most experiments require hypotheses, framed in terms of a dependent variable and an independent variable.
• Tests do not produce new knowledge, but can be used to assist people and also to support experiments. Experiments lead to new knowledge.
• Tests centre on the individual’s psychological construct. Experiments can go beyond a single individual.
KEY DIFFERENCES BETWEEN QUALITATIVE AND QUANTITATIVE RESEARCH METHODS
SAMPLING
• The process of selecting a number of individuals for a study in such a way that the individuals
represent the larger group from which they were selected.
• Sampling is the process of choosing a representative portion of the entire population. It is an integral part of research methodology and involves selecting a group of people, events, behaviors, or other elements with which to conduct a study.
• Population is an accessible group of people who meet a well-defined set of eligibility criteria.

• Sample: the subset of the population that is selected for a study. Members of the sample are also called the subjects or respondents of the study.
• A sample size can be determined using Slovin’s (1960) formula:

n = N / (1 + N·e²)

where n is the sample size, N is the population size, e is the margin of error (.05 or .01), and 1 is a constant value.
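For illustration only (not part of the original slides), Slovin’s formula can be sketched in Python; the function name and the choice to round up are my own assumptions:

```python
import math

def slovin_sample_size(population_size: int, margin_of_error: float = 0.05) -> int:
    """Slovin's formula: n = N / (1 + N * e**2)."""
    n = population_size / (1 + population_size * margin_of_error ** 2)
    return math.ceil(n)  # round up so the sample is no smaller than required

# Example: a population of 10,000 at a 5% margin of error
print(slovin_sample_size(10_000, 0.05))  # 385
```

Rounding up is a common convention but is not dictated by the formula itself.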
SAMPLING
• Probability Sampling: Involves the selection of elements from the population using random procedures, in which each element of the population has an equal and independent chance of being chosen.
• Five Classifications of Probability Sampling
• 1. Simple Random Sampling: Each member of the population has an equal chance of being included in the sample. The most commonly used method is the lottery or fishbowl technique. The lottery method requires a complete listing of the members of the population: the names or codes of all members are written on paper cards and placed in a container, and the researcher draws the desired number of samples from the container. The process is relatively easy for a small population but difficult and time-consuming for a large one.
• 2. Systematic Sampling Technique: A type of probability sampling which selects samples by following a rule set by the researcher, taking every kth member after a random start has been determined. A system is a plan for selecting members once the starting point, or random start, is fixed; every kth member of the population is then drawn according to that system.
• 3. Stratified Random Sampling: A type of probability sampling which selects members of the sample proportionally from each subpopulation or stratum. It is used when the population is too large to handle and is divided into subgroups (called strata). Samples per stratum are then randomly selected, but consideration must be given to the sizes of the random samples drawn from the subgroups. An example procedure is proportional allocation, which selects sample sizes proportional to the sizes of the different subgroups.
• 4. Cluster Sampling: Used when the population is divided into groups or clusters. Samples are selected in groups rather than as individuals; this is often employed in large-scale surveys.
• 5. Multi-Stage Sampling: Selects samples using two or more sampling techniques in succession. It is rarely used because of the complexity of its application, and it requires a lot of effort, time, and cost.
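As a rough sketch of the first two techniques above (Python’s random module stands in for the lottery drawing; function names and data are my own):

```python
import random

def simple_random_sample(population, k, seed=None):
    """Lottery/fishbowl method: every member has an equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(list(population), k)

def systematic_sample(population, k, seed=None):
    """Pick a random start, then take every kth member of the listing."""
    population = list(population)
    interval = len(population) // k        # the sampling interval
    rng = random.Random(seed)
    start = rng.randrange(interval)        # random start within the first interval
    return population[start::interval][:k]

members = [f"member_{i:03d}" for i in range(100)]
print(simple_random_sample(members, 5, seed=1))
print(systematic_sample(members, 5, seed=1))
```

Note how the systematic draw is fully determined once the random start is chosen, while the simple random draw re-randomizes every pick.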
SAMPLING
• Non-Probability Sampling – Involves the selection of elements from a population using nonrandom procedures.
• Characteristics of Non-Probability Sampling: 1. The members of the sample are drawn or selected based on the judgment of the researcher. 2. The results of these techniques are relatively biased. 3. The techniques lack objectivity in terms of the selection of samples. 4. The samples are not very reliable. 5. The techniques are convenient and economical to use.
• Types of Non-Probability Sampling
• 1. Convenience or Accidental Sampling – Involves the nonrandom selection of subjects based on their availability or
convenient accessibility.
• 2. Quota Sampling – Involves the nonrandom selection of elements based on the identification of specific
characteristics to increase the sample’s representativeness.
• 3. Purposive or Judgmental Sampling – Involves the nonrandom selection of elements based on the researcher’s
judgment and knowledge about the population. – This is useful when a group of subjects is needed to participate in a
pretest of newly developed instruments or when a group of experts is desirable to validate research information.
• 4. Snowball Sampling – The chain-referral process allows the researcher to reach populations that are difficult to sample when using other sampling methods. – Cheap, simple, and cost-efficient. – Requires little planning and a smaller workforce compared to other sampling techniques.
• A standard error is the standard deviation of the sampling distribution of a statistic. Standard error is a statistical
term that measures the accuracy with which a sample represents a population. In statistics, a sample mean deviates
from the actual mean of a population; this deviation is the standard error.
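A minimal sketch of the standard error of the mean in Python (the sample data are invented for illustration):

```python
import statistics

def standard_error(sample):
    """Standard error of the mean: sample standard deviation / sqrt(n)."""
    return statistics.stdev(sample) / len(sample) ** 0.5

scores = [82, 75, 90, 68, 77, 85, 73, 80]   # hypothetical test scores
print(f"mean = {statistics.mean(scores):.2f}, SE = {standard_error(scores):.2f}")
```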
PARAMETRIC VS. NON PARAMETRIC TEST
When to use which statistical tests: Parametric or nonparametric?

Homogeneity of Variance
 The variance is a measure of the dispersion of the random variable about the mean. In other words, it indicates
how far the values spread out.
 It refers to the assumption that the variance within each population is equal.
 Homogeneity of Variances is assessed by Levene’s test. (T-test and ANOVA use Levene’s test.)
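Levene’s test statistic is simple enough to sketch by hand. Below is an illustrative pure-Python version of the mean-centred W statistic only (no p-value); the function name and data are my own:

```python
from statistics import mean

def levene_statistic(*groups):
    """Levene's W: an ANOVA on absolute deviations from each group's mean.
    A large W suggests the group variances are NOT homogeneous."""
    k = len(groups)
    z = [[abs(x - mean(g)) for x in g] for g in groups]        # absolute deviations
    n_total = sum(len(g) for g in groups)
    z_bar_i = [mean(zi) for zi in z]                           # per-group mean deviation
    z_bar = sum(sum(zi) for zi in z) / n_total                 # grand mean deviation
    between = sum(len(g) * (zb - z_bar) ** 2 for g, zb in zip(groups, z_bar_i))
    within = sum((zij - zb) ** 2 for zi, zb in zip(z, z_bar_i) for zij in zi)
    return (n_total - k) / (k - 1) * between / within

# Identical spread -> W is 0; very different spread -> W is large
print(levene_statistic([5, 6, 7, 8], [6, 7, 8, 9]))    # 0.0
print(levene_statistic([5, 6, 7, 8], [1, 7, 13, 19]))  # much larger
```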
KEY DIFFERENCES BETWEEN PARAMETRIC AND NONPARAMETRIC TESTS

• A statistical test in which specific assumptions are made about the population parameters is known as a parametric test. A statistical test used in the case of non-metric independent variables is called a nonparametric test.
• In the parametric test, the test statistic follows a known distribution. On the other hand, the test statistic is arbitrary in the case of the nonparametric test.
• In the parametric test, it is assumed that the variables of interest are measured on an interval or ratio level. In the nonparametric test, by contrast, the variables of interest are measured on a nominal or ordinal scale.
• In general, the measure of central tendency in the parametric test is the mean, while in the nonparametric test it is the median.
• In the parametric test, there is complete information about the population. Conversely, in the
nonparametric test, there is no information about the population.
• The applicability of parametric test is for variables only, whereas nonparametric test applies to both
variables and attributes.
• For measuring the degree of association between two quantitative variables, Pearson’s coefficient of correlation is used in the parametric test, while Spearman’s rank correlation is used in the nonparametric test.
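To make the last contrast concrete, here is an illustrative sketch showing that Spearman’s rho is just Pearson’s r applied to ranks. It uses simple integer ranks with no tie handling, and the data are invented:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson's product-moment correlation (parametric)."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson's r computed on the ranks."""
    def ranks(v):                      # simple ranks; ties are not averaged
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r
    return pearson_r(ranks(x), ranks(y))

study_hours = [1, 2, 3, 4, 5]
exam_score = [1, 2, 4, 8, 16]          # monotonic but clearly non-linear
print(pearson_r(study_hours, exam_score))    # below 1: the relation is not linear
print(spearman_rho(study_hours, exam_score)) # ~1.0: the ranks agree perfectly
```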
Definition of Parametric Test

The parametric test is a hypothesis test which provides generalisations for making statements about the mean of the parent population. A t-test, based on Student’s t-statistic, is often used in this regard. The t-statistic rests on the underlying assumptions that the variable is normally distributed and the mean is known or assumed to be known. The population variance is estimated from the sample. It is assumed that the variables of interest in the population are measured on an interval scale.

Definition of Nonparametric Test

The nonparametric test is defined as a hypothesis test which is not based on underlying distributional assumptions, i.e. it does not require the population’s distribution to be described by specific parameters. It is mainly based on differences in medians; hence, it is alternately known as the distribution-free test. The test assumes that the variables are measured on a nominal or ordinal level. It is used when the independent variables are non-metric.
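One way to see why the median suits distribution-free tests: it resists outliers that drag the mean. The reaction-time data below are invented for illustration:

```python
from statistics import mean, median

reaction_ms = [310, 295, 305, 300, 320]      # roughly symmetric sample
with_outlier = reaction_ms + [2000]          # one extreme observation

print(mean(reaction_ms), median(reaction_ms))    # 306 vs 305: nearly identical
print(mean(with_outlier), median(with_outlier))  # mean jumps past 580; median barely moves
```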
PARAMETRIC VS. NON PARAMETRIC TEST
Normal Distribution?
TYPE 1 AND TYPE 2 ERROR
REGRESSION
• What is Linear Regression?
• Linear regression is the most basic type of regression and commonly used predictive
analysis.  
• The overall idea of regression is to examine two things: (1) does a set of predictor
variables do a good job in predicting an outcome variable?  Is the model using the
predictors accounting for the variability in the changes in the dependent variable? (2)
Which variables in particular are significant predictors of the dependent variable?  And in
what way do they--indicated by the magnitude and sign of the beta estimates--impact the
dependent variable?  
• These regression estimates are used to explain the relationship between one dependent
variable and one or more independent variables. (3) What is the regression equation that
shows how the set of predictor variables can be used to predict the outcome?  
• The simplest form of the equation with one dependent and one independent variable is
defined by the formula y = c + b*x, where y = estimated dependent score, c = constant, b =
regression coefficients, and x = independent variable.
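The formula y = c + b*x can be estimated by ordinary least squares; a minimal sketch (the function name and data are invented for illustration):

```python
from statistics import mean

def fit_line(x, y):
    """Ordinary least squares for y = c + b*x; returns (c, b)."""
    mx, my = mean(x), mean(y)
    # slope = covariance of x and y divided by variance of x
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    c = my - b * mx                      # line passes through the point of means
    return c, b

hours = [1, 2, 3, 4, 5]
score = [52, 55, 61, 65, 72]
c, b = fit_line(hours, score)
print(f"score = {c:.1f} + {b:.1f} * hours")  # score = 46.0 + 5.0 * hours
```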
REGRESSION
• Uses of regression
• Three major uses for regression analysis are (1) causal analysis, (2) forecasting an effect, and (3)
trend forecasting. 
• Unlike correlation analysis, which focuses on the strength of the relationship between two or more variables, regression analysis assumes a dependence or causal relationship between one or more independent variables and one dependent variable.
• Firstly, regression might be used to identify the strength of the effect that the independent variable(s) have on a dependent variable. Typical questions are: what is the strength of the relationship between dose and effect, sales and marketing spend, or age and income?
• Secondly, it can be used to forecast effects or the impact of changes. That is, regression analysis helps us understand how much the dependent variable changes with a change in one or more independent variables. A typical question is, "How much additional Y do I get for one additional unit of X?"
• Thirdly, regression analysis predicts trends and future values. The regression analysis can be used to get point estimates. Typical questions are, "What will the price of gold be in 6 months from now?" or "What is the total effort for task X?"
REGRESSION
• What are the types of Regressions?
• Linear Regression
• Logistic Regression
• Polynomial Regression
• Stepwise Regression
• Ridge Regression
• Lasso Regression
• Elastic Net Regression
• There are multiple benefits of using regression analysis. They are as follows:
• 1. It indicates the significant relationships between the dependent variable and the independent variables.
• 2. It indicates the strength of the impact of multiple independent variables on a dependent variable.
• 3. It allows us to compare the effects of variables measured on different scales, such as the effect of price changes and the number of promotional activities. These benefits help market researchers, data analysts, and data scientists to evaluate and select the best set of variables for building predictive models.
REGRESSION
• There are several linear regression analyses available to the researcher.
• • Simple linear regression
1 dependent variable (interval or ratio), 1 independent variable (interval or ratio or dichotomous)
• • Multiple linear regression
1 dependent variable (interval or ratio) , 2+ independent variables (interval or ratio or dichotomous)
• • Logistic regression
1 dependent variable (binary), 2+ independent variable(s) (interval or ratio or dichotomous)
• • Ordinal regression
1 dependent variable (ordinal), 1+ independent variable(s) (nominal or dichotomous)
• • Multinomial regression
1 dependent variable (nominal), 1+ independent variable(s) (interval or ratio or dichotomous)
• • Discriminant analysis
1 dependent variable (nominal), 1+ independent variable(s) (interval or ratio)
• When selecting the model for the analysis, another important consideration is model fit. Adding independent variables to a linear regression model will always increase the explained variance of the model (typically expressed as R²). However, adding more and more variables makes the model inefficient, and overfitting can occur. Occam's razor describes the problem well: a model should be as simple as possible, but not simpler. Statistically, if the model includes a large number of variables, the probability increases that some variables will appear statistically significant merely from random effects.
• The second concern of regression analysis is underfitting, which means that the regression estimates are biased. Underfitting occurs when including an additional independent variable in the model would reduce the effect strength of the existing independent variable(s). Mostly, underfitting happens when linear regression is used to prove a cause-effect relationship that is not there. This might be due to the researcher's empirical pragmatism or the lack of a sound theoretical basis for the model.
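Adjusted R² is one standard guard against the "more variables always raise R²" problem described above. The sketch below is illustrative; the data and predictions are invented:

```python
def r_squared(y, y_hat):
    """Proportion of variance in y explained by the predictions y_hat."""
    y_mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - y_mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def adjusted_r_squared(y, y_hat, n_predictors):
    """Penalises R² for each extra predictor, discouraging overfitting."""
    n = len(y)
    r2 = r_squared(y, y_hat)
    return 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)

y = [10, 12, 15, 18, 21, 24]
y_hat = [11, 12, 14, 18, 22, 23]       # hypothetical model predictions
print(r_squared(y, y_hat))                              # plain R²
print(adjusted_r_squared(y, y_hat, n_predictors=3))     # lower: 3 predictors are penalised
```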
USES OF MEASURE OF CENTRAL TENDENCY AND SD
LEVEL OF MEASUREMENT OR SCALE
• Why Is Level of Measurement Important?
• Helps you decide what statistical analysis is appropriate on the values that were assigned
• Helps you decide how to interpret the data from that variable
• In Nominal: 1. The values “name” the attribute uniquely. 2. The values do not imply any ordering of the cases; for example, jersey numbers in football. 3. Even though player 32 has a higher number than player 19, you can’t say from the data that he is greater than or more than the other.
• In Ordinal: 1. Attributes can be rank-ordered. 2. Distances between attributes do not have any meaning; for example, code Educational Attainment as 0 = less than H.S.; 1 = some H.S.; 2 = H.S. degree; 3 = some college; 4 = college degree; 5 = post-college. 3. Is the distance from 0 to 1 the same as from 3 to 4?
• In Interval: 1. Distances between attributes have meaning; for example, temperature (in Fahrenheit). 2. The distance from 30 to 40 is the same as the distance from 70 to 80. 3. Ratios don’t make any sense: 80 degrees is not twice as hot as 40 degrees (even though the attribute value is twice as large).
• In Ratio: 1. Has an absolute zero that is meaningful. 2. Can construct a meaningful ratio (fraction); for example, the number of clients in the past six months. 3. It is meaningful to say that “...we had twice as many clients in this period as we did in the previous six months.”
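A toy lookup (the categories and statistic names are my own simplification) showing how the level of measurement constrains which descriptive statistics are meaningful:

```python
# Which descriptive statistics are meaningful at each level of measurement
ALLOWED_STATS = {
    "nominal":  {"mode", "frequency"},
    "ordinal":  {"mode", "frequency", "median", "percentile"},
    "interval": {"mode", "frequency", "median", "percentile", "mean", "sd"},
    "ratio":    {"mode", "frequency", "median", "percentile", "mean", "sd", "ratio"},
}

def can_use(statistic, level):
    return statistic in ALLOWED_STATS[level]

print(can_use("mean", "ordinal"))    # False: distances between ranks are undefined
print(can_use("ratio", "interval"))  # False: no absolute zero (e.g. Fahrenheit)
print(can_use("mean", "ratio"))      # True
```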
CONTROL TECHNIQUES OF EXTRANEOUS VARIABLES
THEORIES OF ORGANIZATIONAL BEHAVIORS.
LEADERSHIP THEORY: CONTINGENCY THEORY OF LEADERSHIP
• Great Man Theory (1840s): The Great Man theory assumes that the traits of leadership are intrinsic. That simply means that great
leaders are born not made. Furthermore, the belief was that great leaders will rise when confronted with the appropriate
situation. The theory was popularized by Thomas Carlyle.

• Trait Theory (1930's - 1940's)


• The trait leadership theory believes that people are either born with or develop certain qualities that will make them excel in leadership roles. That is, certain qualities such as intelligence, a sense of responsibility, creativity, and other values put anyone in the shoes of a good leader. In fact, Gordon Allport, an American psychologist, “...identified almost 18,000 English personality-relevant terms.”
• The trait theory of leadership focused on analyzing mental, physical, and social characteristics in order to gain more understanding of which characteristic, or combination of characteristics, is common among leaders.
• Behavioural Theories (1940's - 1950's)
• Contingency Theories (1960's)
• Fiedler's contingency theory is one of the contingency theories that states that effective leadership depends not only on the style
of leading but on the control over a situation. There needs to be good leader-member relations, task with clear goals and
procedures, and the ability for the leader to mete out rewards and punishments. Lacking these three in the right combination and
context will result in leadership failure. Fiedler created the least preferred co-worker (LPC) scale, where a leader is asked what
traits can be ascribed to the co-worker that the leader likes the least.
• Transactional leadership Theories (1970's)
• Transformational Leadership Theories (1970s)
CURRENT THEORY OF PERSONALITY
TYPE A TYPE B PERSONALITY
• Type A and Type B personality theory describes two contrasting personality types. In this theory, personalities that are more competitive, outgoing, ambitious, impatient and/or aggressive are labeled Type A, while more relaxed personalities are labeled Type B.
• The two cardiologists who developed this theory came to believe that Type A
personalities had a greater chance of developing coronary heart disease. Following
the results of further studies and considerable controversy about the role of the 
tobacco industry funding of early research in this area, some reject, either partially
or completely, the link between Type A personality and coronary disease.
Nevertheless, this research had a significant effect on the development of the health psychology field, in which psychologists look at how an individual's mental state affects their physical health.
WHY IS PSYCHOLOGY KNOWN AS A SCIENCE?
• The short answer to the question of whether psychology is an art or science is “yes.”
• Science studies overt behavior because overt behavior is objectively observable and can be measured,
allowing different psychologists to record behavior and agree on what has been observed. This
means that evidence can be collected to test a theory about people.
• Science refers to a system of acquiring knowledge [based on] observation and experimentation to
describe and explain natural phenomena.
• Cause and effect phenomena
• In many ways, it is both. There are branches within psychology that are strictly devoted to understanding the human mind and behavior through rigorous scientific experimentation.
• Psychologists conduct basic and applied research, serve as consultants to communities and
organizations, diagnose and treat people, and teach future psychologists and those who will pursue
other disciplines. They test intelligence and personality.
• But the practice of psychology as a professional discipline is more than simply the mechanical
implementation of proven scientific techniques.
• Rather, it requires the practitioner’s use of professional experience, manner of delivery, empathic
intuition, and judgment. So, the professional practice of psychology is definitely an art.
DIFFERENCE BETWEEN SOCIAL PSYCHOLOGY AND SOCIOLOGY?
• Social psychology, put simply, is the study of people in a
group. Sociology is the study of groups of people. Social psychology is
interested in how the group affects the individual and vice-
versa. Sociology is interested in how the group behaves and how
groups interact with each other and society.
CLINICAL PSYCHOLOGY- APPLICATION BASED QUESTIONS
STEPS IN SCALE CONSTRUCTION

Conceptual variables (or constructs) form the basis of research hypotheses and theories. Examples are reading time, attitudes toward the Euro, self-esteem, depression, and autism.
Operational definitions specify the procedures for turning a construct into a measured variable.
ATTITUDE SCALE
• The semantic differential technique of Osgood et al. (1957) asks a
person to rate an issue or topic on a standard set of bipolar
adjectives (i.e. with opposite meanings), each representing a seven
point scale.
• Likert scale: a 5-point scale developed by Rensis Likert.
• Thurstone’s Equal-Appearing Interval Method: an 11-point scale from unfavourable to favourable.
• Bogardus social distance scale: a 7-point scale.
• Guttman cumulative scaling method (scalogram analysis): a unidimensional scale built from dichotomous (yes/no) items.
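Scoring a Likert-type attitude scale is straightforward to sketch. The reverse-keyed item set and the 5-point maximum below are invented for illustration:

```python
# Hypothetical 5-point Likert scale where item 2 is reverse-keyed
# (a negatively worded statement counts opposite to the rest).
REVERSE_KEYED = {2}
SCALE_MAX = 5

def score_likert(responses):
    """Total attitude score; reverse-keyed items are flipped (6 - response)."""
    total = 0
    for item, response in enumerate(responses, start=1):
        if item in REVERSE_KEYED:
            response = SCALE_MAX + 1 - response
        total += response
    return total

# Respondent answered items 1-4 as: 4, 2 (reverse-keyed), 5, 3
print(score_likert([4, 2, 5, 3]))  # 16
```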
THEORIES OF INTELLIGENCE
It is the ability to acquire and apply knowledge and skill.
“Intelligence is the aggregate or global capacity of the individual to act purposefully,
to think rationally and to deal effectively with his environment” (Wechsler, 1944).
