
RESEARCH AND PUBLICATION ETHICS

Introduction to Philosophy

1. Definition

MEANING OF PHILOSOPHY

Philosophy is the study of general and fundamental problems, such as those connected with existence, knowledge, values, reason, mind, and language. Philosophy is the rational attempt to formulate, understand, and answer fundamental questions.

2. NATURE OF PHILOSOPHY

Philosophy is a rational attempt to look at the world as a whole.

Philosophy seeks to combine the conclusions of the various sciences and human
experience into some kind of consistent world view. Philosophers wish to see life, not with the
specialized slant of the scientist or the businessperson or the artist, but with the overall view of
someone cognizant of life as a totality.

Philosophy is the logical analysis of language and the clarification of the meaning of words
and concepts.

Certainly this is one function of philosophy. In fact, nearly all philosophers have used
methods of analysis and have sought to clarify the meaning of terms and the use of language.
Some philosophers see this as the main task of philosophy, and a few claim this is the only
legitimate function of philosophy.

Philosophy is a group of perennial problems that interest people and for which philosophers
always have sought answers.

Philosophy presses its inquiry into the deepest problems of human existence. Some of the
philosophical questions raised in the past have been answered in a manner satisfactory to the
majority of philosophers. Many questions, however, have been answered only tentatively, and
many problems remain unsolved.

• "What is truth?"
• "What is the distinction between right and wrong?"
• What is life and why am I here?
• Why is there anything at all?

3. Scope of philosophy

As a philosophical endeavour, Systems Philosophy is concerned with the classical purposes of philosophy, namely to:

1. Clear up confusions in our concepts and ways of thinking (logic and analysis);

2. Reflect on the nature of knowledge and how we can obtain it (epistemology);

3. Put our knowledge into some kind of order so we can see what is known, what is
unknown, and see the 'big picture' it suggests (knowledge maps and a worldview
including a world picture and a lifeview);

4. Comment on the scope and nature of gaps in our knowledge, and the meaning this has for
interpreting the 'big picture', and give guidance about which gaps are closable, how that
might be done, what closing them would mean, and how to prioritise the work to close
them (big questions, vision, research agenda);

5. Contribute to answering questions of ultimate concern by reasoning from principles or empirical evidence, or suggesting new ways in which relevant evidence might be gathered, or helping investigators to understand the meaning and utility of the worldview suggested by these philosophical investigations.

Branches of Philosophy

Philosophy consists of three parts:

(1) Epistemology;
(2) Ontology and Metaphysics, and
(3) Axiology.
Epistemology is the theory of Knowledge. Ontology or Metaphysics is the theory of
Being or Reality. Axiology is the theory of Values. Modern philosophy is not dogmatic. It does
not plunge into metaphysical investigation of the nature of reality without a prior criticism of the
organ of knowledge. It is based on epistemology. Epistemology enquires into the nature, origin,
validity and extent of knowledge.

Is experience or reason the source of knowledge?

Does knowledge represent the reality?

Epistemology

Epistemology seeks to answer these questions. It has a dominant place in contemporary philosophy. It is a preliminary to metaphysical speculation. It is a prior criticism of the organ of knowledge. Ontology or Metaphysics is the theory of Being. It enquires into the nature of reality. It investigates the nature of the world, including matter and life, of the soul, and of God or the Absolute.

Ontology

Ontology of Nature, Ontology of the Soul or Mind, and Ontology of the Absolute are the
three essential parts of metaphysics. Ontology of Nature investigates the nature of matter, time,
space, causality, life, evolution, mechanism, and teleology.

Ontology of the soul investigates the nature, origin and destiny of the soul, and its relation to the body. Ontology of God investigates the nature and attributes of God and his relation to the world and the souls. It discusses and examines proofs for the existence of God.

Axiology

Axiology is the theory of values or ideals. Values are the supreme norms of life. Logic investigates the nature of Truth. Ethics investigates the nature of Good. Aesthetics investigates the nature of Beauty. Theology investigates the nature of the Holy.

Axiology enquires into the nature of intellectual, moral, aesthetic, and religious values. It investigates the relation of values to reality. It enquires into their subjectivity or objectivity. It is a very important branch of contemporary philosophy. The problem of values is in the forefront of recent philosophy.

2. Ethics

Research ethics provides guidelines for the responsible conduct of research. In addition, it
educates and monitors scientists conducting research to ensure a high ethical standard.
Research ethics defines the way to incorporate ethical principles into research
practice in all stages of investigation, from planning and inception through to completion and
dissemination of results.

From: Reference Module in Neuroscience and Biobehavioral Psychology, 2020

2.1 Moral Philosophy

The field of ethics, or moral philosophy, investigates theories that can systematically describe what makes acts right or wrong. Moral philosophy is usually divided into three categories: metaethics, applied ethics, and normative ethics. Metaethics investigates where our moral values, language, and principles come from and what they mean; it is concerned with "what is morality?" rather than "what is moral?"

Applied ethics seeks to apply philosophical tools to examine specific controversial issues and provide practical solutions to moral problems.

Normative ethics investigates the moral standards that regulate right and wrong
conduct. Theories within normative ethics include utilitarianism, consequentialism,
contractualism, virtue ethics, and more.

Moral judgment and reaction

A moral judgment is a judgment which deals with the moral value or quality of an action. It is a judgment of value, and it evaluates the rightness or wrongness of our actions. When we analyse a moral judgment, we find that it contains (a) a subject which judges, (b) an object whose action is judged, (c) a standard in conformity with which the action of the subject is judged, and (d) a power of judging the action as required. Moral judgment is the judgment of the moral quality of voluntary and habitual actions. Generally, a moral judgment is given on the voluntary and habitual actions of a rational being. The voluntary actions of a rational person, which involve deliberation, choice, and resolution, have the moral quality of rightness or wrongness. They are considered to be right or wrong with reference to the moral standard, and on the basis of this standard, moral judgment is given. If a voluntary action conforms to the standard or the ideal, the moral judgment will express it as a right action. If the action conflicts with the standard or norms, the moral judgment will express it as wrong. So, moral judgment involves comparison of voluntary acts with the moral standard.

• Moral judgment is active in nature, because moral judgment is given upon voluntary and habitual acts of persons and not upon their passive experiences.

• Moral judgment is social in character, because voluntary acts of a person are right or wrong insofar as they more or less affect the interests of others. Man is a social being; his rights and duties of action arise out of his relation to other persons in society. So, moral judgment apart from society is inconceivable. Moral judgment can also be said to be obligatory in character, because a judgment of an act as right is given when we feel the moral obligation to do it, and a judgment of an act as wrong is given when we feel the moral obligation to refrain from it. Thus, moral judgment is always accompanied by the sense of duty or moral obligation, and this moral obligation is essentially self-imposed. In this way, we can find out the meaning of moral judgment.

Ethics in research

Research is one of the most important and fundamental activities of human society and has been singularly responsible for all the technological and economic advances that we enjoy. When carried out ethically, it also provides lasting pleasure and satisfaction to the researcher. Any short-cuts taken to achieve pleasure or recognition in the short term may not only harm the researcher in more than one way in the long run, but also often have more lasting and wider implications, misdirecting the efforts of other researchers with unwanted consequences. Therefore, effective training of enthusiastic young researchers in good ethical practices is as important as training them effectively in their chosen disciplines.

Research Integrity

Research integrity may be defined as active adherence to the ethical principles and
professional standards essential for the responsible practice of research.

By active adherence we mean adoption of the principles and practices as a personal credo, not
simply accepting them as impositions by rulemakers.

By ethical principles we mean honesty, the golden rule, trustworthiness, and high regard for the
scientific record.

NAS report definition: "For individuals research integrity is an aspect of moral character and experience. It involves above all a commitment to intellectual honesty and personal responsibility for one's actions and to a range of practices that characterize responsible research conduct." These practices include:

1. Honesty and fairness in proposing, performing, and reporting research;
2. Accuracy and fairness in representing contributions to research proposals and reports;
3. Proficiency and fairness in peer review;
4. Collegiality in scientific interactions, communications and sharing of resources;
5. Disclosure of conflicts of interest;
6. Protection of human subjects in the conduct of research;
7. Humane care of animals in the conduct of research;
8. Adherence to the mutual responsibilities of mentors and trainees.

While science encourages (no, requires) vigorous defense of one's ideas and work, ultimately research integrity means examining the data with objectivity and being guided by the results rather than by preconceived notions.

We will return to the importance of preserving the integrity of the scientific record in the section on misconduct.

Scientific misconduct

Scientific misconduct is the violation of the standard codes of scholarly conduct and ethical
behavior in the publication of professional scientific research.

• Danish definition: "Intention or gross negligence leading to fabrication of the scientific message or a false credit or emphasis given to a scientist"

The consequences of scientific misconduct can be damaging for perpetrators and the journal audience [3][4] and for any individual who exposes it [5]. In addition, there are public health implications attached to the promotion of medical or other interventions based on false or fabricated research findings.

Common Types of Scientific Misconduct

1. Misappropriation of Ideas – taking the intellectual property of others, perhaps as a result of reviewing someone else's article, manuscript, or grant application, and proceeding with the idea as your own.

2. Plagiarism – utilizing someone else's words, published work, research processes, or results without giving appropriate credit via full citation.

3. Self-plagiarism – recycling or re-using your own work without appropriate disclosure and/or citation.

4. Impropriety of Authorship – claiming undeserved authorship on your own behalf, excluding material contributors from co-authorship, including non-contributors as authors, or submitting multi-author papers to journals without the consensus of all named authors.

5. Failure to Comply with Legislative and Regulatory Requirements – willful violations of rules concerning the safe use of chemicals, care of human and animal test subjects, inappropriate use of investigative drugs or equipment, and inappropriate use of research funds.

6. Violation of Generally Accepted Research Practices – this can include the proposal of the research study, manipulation of experiments to generate preferred results, deceptive statistical or analytical practices to generate preferred results, or improper reporting of results to present a misleading outcome.

7. Falsification of Data – rather than manipulating the experiments or the data to generate preferred results, this transgression simply fabricates the data entirely.

8. Failure to Support Validation of Your Research – refusing to supply complete datasets or research material needed to facilitate validation of your results through a replication study.

9. Failure to Respond to Known Cases of Unsuccessful Validation Attempts – published research that is found to be flawed should be retracted from the journal that published it.

10. Inappropriate Behavior in Relation to Suspected Misconduct – failure to cooperate with any claims of misconduct made against you, failure to report known or suspected misconduct, destruction of any evidence related to any claim of misconduct, retaliation against any persons involved in a claim of misconduct, or knowingly making false claims of misconduct.

Falsification

Falsification is the changing or omission of research results (data) to support claims, hypotheses, other data, etc. Falsification can include the manipulation of research instrumentation, materials, or processes. Manipulation of images or representations in a manner that distorts the data or "reads too much between the lines" can also be considered falsification.

Fabrication

Fabrication is the construction and/or addition of data, observations, or characterizations that never occurred in the gathering of data or running of experiments. Fabrication can occur when "filling out" the rest of the experimental runs, for example. Claims about results need to be made on complete data sets (as is normally assumed); claims made on the basis of incomplete or assumed results are a form of fabrication.

Plagiarism

Plagiarism is the appropriation of another person's ideas, processes, results, or words without giving appropriate credit.

Plagiarism is, perhaps, the most common form of research misconduct. Researchers must take care to cite all sources and take careful notes. Using or representing the work of others as your own work constitutes plagiarism, even if committed unintentionally. When reviewing privileged information, such as when reviewing grants or journal article manuscripts for peer review, researchers must recognize that what they are reading cannot be used for their own purposes, because it cannot be cited until the work is published or publicly available.

Types of plagiarism

Direct Plagiarism :

Direct plagiarism is the word-for-word transcription of a section of someone else's work, without attribution and without quotation marks. The deliberate plagiarism of someone else's work is unethical, academically dishonest, and grounds for disciplinary actions, including expulsion.

Self-Plagiarism:

Self-plagiarism occurs when a student submits his or her own previous work, or mixes
parts of previous works, without permission from all professors involved. For example, it would
be unacceptable to incorporate part of a term paper you wrote in high school into a paper
assigned in a college course. Self-plagiarism also applies to submitting the same piece of work
for assignments in different classes without previous permission from both professors.

Mosaic Plagiarism:

Mosaic plagiarism occurs when a student borrows phrases from a source without using quotation marks, or finds synonyms for the author's language while keeping to the same general structure and meaning of the original. Sometimes called "patch writing," this kind of paraphrasing, whether intentional or not, is academically dishonest and punishable – even if you footnote your source.

Accidental Plagiarism:

Accidental plagiarism occurs when a person neglects to cite their sources, misquotes their sources, or unintentionally paraphrases a source by using similar words, groups of words, and/or sentence structure without attribution. Students must learn how to cite their sources and to take careful and accurate notes when doing research. Lack of intent does not absolve the student of responsibility for plagiarism. Cases of accidental plagiarism are taken as seriously as any other plagiarism and are subject to the same range of consequences as other types of plagiarism.

Redundant publication

Redundant publication occurs when multiple papers, without cross-reference in the text, share the same text, data or results.

Duplicate publication

Duplicate publication occurs when an author reuses substantial parts of his or her own published
work without providing the appropriate references. This can range from publishing an identical
paper in multiple journals, to only adding a small amount of new data to a previously published
paper.

Duplicate publication, multiple publication, or redundant publication refers to publishing the same intellectual material more than once, by the author or publisher. It does not refer to the unauthorized republication by someone else, which constitutes plagiarism, copyright violation, or both.

Multiple submission is not plagiarism, but it is today often viewed as academic misbehavior.

Duplicate & Overlapping publication

Double publication is deemed to have occurred if the same manuscript, e.g. a scientific article, is
published more than once. In this connection, we speak of primary and secondary publication.
There are many examples of legitimate double publication, including:

• Publication of an article in several languages

• A reprint of a previously published scientific article in an anthology that brings together a number of significant scientific contributions in the field in question

• Publication of a contribution to a Festschrift, which often has only a few readers, in a scientific journal

• Parallel publication in an open access repository

Double publication without a clear indication that the manuscript in question has been previously
published is generally considered to be a breach of responsible research practice.

1. Answering the same question with different datasets. For example, authors may use the same experiment in two locations and publish the results from each location separately.

2. Splitting apart data collected in the same system to answer different questions (a.k.a. data fragmentation, salami slicing, piecemeal publication).

3. Augmenting previously published data with a smaller dataset that may not be able to stand on its own (a.k.a. data augmentation, meat extending).

The 'slicing' of research that would form one meaningful paper into several different papers is called 'salami publication' or 'salami slicing'.

Selective reporting

Selective reporting bias is when results from scientific research are deliberately not fully or
accurately reported, in order to suppress negative or undesirable findings. The end result is that
the findings are not reproducible, because they have been skewed by bias during the analysis or
writing stages.

Methods misreporting or misrepresenting

• Changing objectives or hypotheses to conform to the results.
• Not distinguishing prespecified from post hoc analyses.
• Failing to report protocol deviations.

The misrepresentation of research findings may arise for a number of reasons. It may be wilful,
dishonest, accidental, partisan, political, ignorant, biased, careless or any combination of these.

Common ways in which research findings are misrepresented are explored under the following
sub-headings:

• flawed research
• using findings out of context
• stretching findings
• distorting findings
• rejecting or ignoring findings

Publication ethics

Ethical standards for publication exist to ensure high-quality scientific publications, public
trust in scientific findings, and that people receive credit for their ideas.

Researchers should conduct their research - from research proposal to publication - in line with best practices and codes of conduct of relevant professional bodies and/or national and international regulatory bodies. In rare cases it is possible that ethical issues or misconduct could be encountered in your journal when research is submitted for publication.

What is the importance of publication?

Publications can also be regarded as an asset that enables authors to gain recognition and
acknowledgement as experts in a particular field at national and international
levels. Publication in peer-reviewed journals also gives international recognition for an
individual, department, university, and institutions.

Best Practices for Conducting Research

Investigators should review the information listed below prior to initiating a new research study, as it will help to ensure regulatory compliance and good clinical practices.

• Know and observe applicable federal regulations, state law and institutional SOPs and/or policies.
• Know and observe your department's policies and procedures for research study-related activities.
• Know and follow the IRB-approved protocol.
• Know the study-related roles and responsibilities of the principal investigator and other research team members.
• Differentiate between the study-related and healthcare provider roles and responsibilities.
• Review the protocol with the research team members, and identify and discuss any concerns or questions regarding conduct of the study.

COPE (Committee on Publication Ethics) is committed to educating and supporting editors, publishers and those involved in publication ethics with the aim of moving the culture of publishing towards one where ethical practices become a normal part of the publishing culture. Our approach is firmly in the direction of influencing through education, resources and support of our members, alongside the fostering of professional debate in the wider community.

Over 20 years, COPE has grown to support members worldwide, from all academic fields. Our members are primarily editors, but also publishers and related organisations and individuals. After a period of consultation with the Trustees and Council, and feedback from our members, the COPE strategic plan was developed to guide the organisation and its activities.

What is WAME? World Association of Medical Editors

Established in 1995, WAME (pronounced "whammy") is a 501(c)(3) nonprofit voluntary association of editors of peer-reviewed medical journals from countries throughout the world who seek to foster international cooperation among and education of medical journal editors. Membership in WAME is free and all decision-making editors of peer-reviewed medical journals are eligible to join. Membership is also available to selected scholars in journal editorial policy and peer review. WAME has more than 1830 members representing more than 1000 journals from 92 countries (as of July 27, 2017). See WAME's History.

WAME has the following goals:

• to facilitate worldwide cooperation and communication among editors of peer-reviewed medical journals;
• to improve editorial standards, to promote professionalism in medical editing through education, self-criticism and self-regulation;
• to encourage research on the principles and practice of medical editing.

WAME's founding members also agreed that members of WAME shall be dedicated to high ethical and scientific principles in the pursuit of the following common goals:

• to publish original, important, well-documented peer-reviewed articles on clinical and laboratory research;
• to provide continuing education in basic and clinical sciences to support informed clinical decision making;
• to enable physicians to remain informed in one or more areas of medicine;
• to improve public health internationally by improving the quality of medical care, disease prevention and medical research;
• to foster responsible and balanced debate on controversial issues and policies affecting medicine and health care;
• to promote peer review as a vehicle for scientific discourse and quality assurance in medicine and to support efforts to improve peer review;
• to achieve the highest level of ethical medical journalism.

Conflicts of interest

A conflict of interest is a situation in which financial or other personal considerations on the part of authors or reviewers have the potential to compromise or bias professional judgment and objectivity. Examples include:

• Participants in the peer-review and publication process not disclosing relationships
• Investigators not disclosing their relationship with a company or a person's association with the organization
• Not disclosing the funding source

Authors and reviewers should declare all conflicts of interest relevant to the work under consideration (i.e. relationships, both financial and personal, that might interfere with the interpretation of the work) to avoid the potential for bias.

Conflicts of Interest in research

Researchers' interests can and often do conflict with one another.

Research:

• Advances knowledge,
• Leads to discoveries that will benefit individuals and society,
• Furthers professional advancement,
• Results in personal gain and satisfaction.

The advancement of knowledge is usually best served by sharing ideas with colleagues, putting many minds to work on the same problem. But personal gain is sometimes best served by keeping ideas to oneself until they are fully developed and then protected through patents, copyrights, or publications.

Conflicts of commitment

The following activities require time and make demands on a researcher's institutional commitments:

• Working on one or more funded projects;
• Preparing to submit a request for a new project;
• Teaching students;
• Attending professional meetings and giving lectures;
• Serving as a peer reviewer;
• Sitting on advisory boards; or
• Working as a paid consultant, officer, or employee in a private company.

Care needs to be taken to assure that these commitments do not inappropriately interfere with
one another.

Allocation of time

Researchers must be careful to follow rules for the allocation of time. At a minimum, these rules require that researchers:

• Honor time commitments they have made, such as devoting a specified percentage of time to a grant or contract;
• Refrain from charging two sources of funding for the same time; and
• Seek advice if they are unsure whether a particular commitment of time is allowed under an institution's or the Government's policies.

Personal and intellectual conflicts

Researchers are also expected to avoid bias in proposing, conducting, reporting, and reviewing research. They therefore should be careful to avoid making judgments or presenting conclusions based solely on personal opinion or affiliations rather than on scientific evidence. Researchers generally should not serve as reviewers for grants and publications submitted by close colleagues and students. Most granting agencies require reviewers to disclose conflicts of interest, including personal conflicts, as a condition of service.

Financial conflicts

• Researchers are permitted to benefit financially from their work.
• Undeclared financial interests may seriously undermine the credibility of the journal, the authors, and the science itself.
• An example might be an investigator who owns stock in a pharmaceutical company that is commissioning the research.

Publication Misconduct

To respect the intellectual property rights of others and uphold the standards for academic
publishing, New Delhi Publishers is adopting a zero tolerance policy towards papers associated
with publication misconduct. Publication misconduct includes plagiarism, fabrication,
falsification, inappropriate authorship, duplicate submission/multiple submissions, overlapping
publication, and salami publication. According to the definition of publication misconduct by the
China National Knowledge Infrastructure (CNKI)
(http://check.cnki.net/Article/rule/2012/12/542.html), we have developed New Delhi Publisher's
definitions, policies and Grammarly standards for publication misconduct, which are as follows:

1 Plagiarism: Plagiarism is the appropriation of another person's thoughts, ideas, data, figures,
research methods, or words without giving appropriate credit, or the over-citation of another
person's published work.

2 Fabrication: Fabrication is the practice of making up data or results without having performed
relevant research.

3 Falsification: Falsification is the practice of changing data or results intentionally such that a misleading conclusion is drawn.

4 Inappropriate authorship: Authorship is not appropriately assigned based on the author's contributions.

5 Duplicate submission/multiple submissions: Duplicate submission/multiple submissions refers to the practice of submitting the same manuscript, or several manuscripts with minor differences (e.g., differences only in title, keywords, abstract, author order, author affiliations, or a small amount of text), to two or more journals at the same time, or submitting to another journal within an agreed or stipulated period.

6 Overlapping publication: Overlapping publication refers to the practice of publishing a paper that overlaps substantially with one already published.

7 Salami publication: Salami publication refers to the practice of slicing data from a large study, which could have been reported in a single paper, into different pieces and publishing them in two or more articles, all of which cover the same population, methods, and question.


Once we find papers associated with any of the above publication misconduct, we will:

1. Reject the manuscript or withdraw the published paper.
2. Not accept manuscripts submitted by the same research team to any New Delhi Publisher journal within two years.
3. Inform the institution the corresponding author is affiliated with, and the funder(s), about such misconduct.
4. Release all penalty documents on the New Delhi Publisher site.
5. Charge $1200 or Rs 72,000 as compensation costs in case a published paper is withdrawn.

In addition, to fight against plagiarism and to ensure high ethical standards for all of the published papers, New Delhi Publisher joined Grammarly in 2012. Grammarly is an effective tool for detecting unoriginal content, enabling our editors to preserve the journal's integrity and the authors' copyright.

Violation of publication ethics

Violation of publication ethics is a global problem which includes duplicate submission, multiple submissions, plagiarism, gift authorship, fake affiliation, ghost authorship, pressured authorship, salami publication and fraud (fabrication and falsification) [2,3], but excludes honest errors committed by the authors.

Publication ethics are violated by all those activities which threaten the integrity of the research publication process. These include authorship disputes, fake affiliations, conflicts of interest, dual submissions, duplicate publication, plagiarism, salami slicing, fabrication and falsification. These violations affect the scientific community, journal editors and peer reviewers, but the ultimate victims are the patients.

Authorship Conflict or violation

A research project generally includes multiple contributors. In a typical scenario, a principal investigator (PI), graduate or undergraduate students, as well as technical staff are involved in a research project. While translating the research findings into a manuscript, all the authors and contributors must be duly listed. Some researchers often try to reduce the number of authors in order to increase the effective contribution of each author. Sometimes, researchers include the names of their colleagues or PIs in the list of authors for one of the manuscripts in order to return a 'favor', even though they may not have contributed to the corresponding study.

An important part of assigning authorship is the order in which authors are listed. In the author list, the first and last positions are the most sought-after ones. The first author is the primary author, i.e. the one who contributed significantly to the study design, conducting the study, and/or collecting critical data. The last author is usually the PI of the study, who along with the first author has conceptualized the research and helps in acquiring the funding. Nevertheless, the order in which authors should be listed is often a source of confusion, which may lead to ethical issues at the time of submission.

The ICMJE guidelines recommend that authorship be awarded to those who make a substantial contribution to (I) conception and design; (II) acquisition of or analysis and interpretation of data; (III) drafting the article or revising it critically for important intellectual content; and (IV) final approval of the version to be published.

Contributorship

Contributing to research can broadly be classified into the following categories:

• Intellectual (ideas, writing)
• Practical (conducting research, data analysis)
• Financial (funds, experimental material)

Any researcher who does not meet all four ICMJE criteria for authorship discussed above should be listed as a contributor. A technician or student who has only prepared some stock solutions for chemical or biological reactions, for example, should not be listed as an author. Instead, their contributions should be listed in the 'Acknowledgements' section of the article.

According to ICMJE, those who do not qualify to be authors but made a contribution to the study should be acknowledged.

Complaints

2.1 What kinds of complaint will we consider?

Complaints may relate to a failure of process (e.g. lengthy delays) or a severe misjudgement (e.g.
an improperly applied retraction notice). They may also relate to author or reviewer misconduct.
Complaints may be made by anyone, including authors, reviewers and readers.
All complaints must be within the scope of the MedEdPublish Editorial Office's remit – i.e. related to the content, policies or processes of the journal. We will not consider complaints where the complainant simply disagrees with a decision taken by the Editorial team (see appeals process below).

2.2 How to make a complaint

Complaints should be emailed to mededpublish@dundee.ac.uk. Please provide as much detail as possible and include supporting information where appropriate (for example, copies of email correspondence). If your complaint relates to a specific article, please include the title and DOI if it is already published and the manuscript ID number if it is unpublished.

2.3 How we handle complaints

We aim to formally acknowledge all complaints within five working days. Please note that the
editorial office is not staffed at weekends. Where possible we will provide a full response within
four weeks. Where this is not possible we will provide regular interim communications, at least
every four weeks.

Complaints will be dealt with by the editorial staff wherever possible, with reference to our
policies and guidelines, but will be escalated to the Editor where necessary. The Editor has the
right to then consult with any third party over the issue, and make a final decision. That final
decision shall be binding, and the matter shall be deemed closed.

Where a serious complaint is made about an Editor, it will be independently investigated by two
members of the Editorial Board. The purpose of the investigation is to establish that correct
procedures have been followed, that decisions have been reached based on academic criteria and
that personal prejudice or bias has not influenced the outcome.

2.4 Complaints or concerns about author or reviewer misconduct

If you wish to complain or raise a concern about suspected author or reviewer misconduct, please
refer to our editorial policy for more detail about our processes for dealing with allegations and
the kind of evidence we might require. The process for raising these complaints and concerns is
the same as above.

Concerns may include, but are not limited to:

• Suspicion of an ethical problem with a manuscript (including undeclared conflicts of interest, false ethical declarations, use of identifiable images without consent or use of copyright images without permission)

• Suspicion of unethical image manipulation in a published article

• Suspected manipulation of the publication process (including practices such as duplicate publication, self-plagiarism, salami-slicing or excessive self-citation)

3. Appeals

We will consider appeals against the Editor's decision only under highly specific circumstances and usually only where a clear breach of policy can be demonstrated.

3.1 Rejected manuscripts

The most common reasons for rejecting manuscripts are:

• The article content is not within the scope of the journal;
• The article is not written in clear and intelligible English;
• Authors have not completed the relevant declarations relating to ethics and funding;
• The article does not conform to our 'Guidelines for Authors' in terms of content, style and/or formatting.

Articles will not usually pass initial editorial screening until the first three of these have been
addressed. In the last two instances, articles are usually reopened to authors to allow changes to
be made within a 6-week window. Failure to meet this deadline will result in automatic rejection
of the manuscript. Where an article has been accepted by the editors and the article processing
charge (APC) has been paid, but authors subsequently fail to make required changes within the
6-week period, the article will be rejected and the APC will be non-refundable.

If the article has been accepted but serious legal or ethical issues come to light after payment of
the APC, e.g. relating to research ethics, copyright, or conflicts of interest which render the
article unpublishable, and which we could not reasonably have foreseen, the article will be
rejected and the APC will not be refunded.

We will not consider appeals against the Editor's decision under any of these circumstances.

It is the authors' responsibility to provide the correct contact details, to monitor correspondence from our office, to respond promptly using the correct email address, and to comply with our requirements. Where a manuscript has been rejected because authors have failed to meet the revision deadline, resubmission is possible but standard fees will be payable.

3.1.1 Rejection of revised articles

Revised articles will not usually be rejected provided they conform to our guidelines for revised versions. We will not consider appeals against the Editor's decision to reject a revised article if it does not meet our requirements.
Authors whose manuscript has been rejected on other grounds may follow the appeals process
(3.3) if they wish to make an appeal, but note that Editors are unlikely to reverse their original
decision unless significant new information is supplied or it can be demonstrated that our
processes were at fault.

3.2 Retracted articles

Editors do not take the decision to retract articles lightly and will usually have conducted an
extensive investigation before doing so. We will only consider appeals against retractions if
substantial evidence can be provided to demonstrate that the decision was unjust.

3.3 Appeals process

Any appeals against the Editor's decision must be made by email to mededpublish@dundee.ac.uk within two weeks of the decision. You will need to provide a detailed explanation of why you disagree with the decision and include supporting information. You should also include the article title and DOI if you are appealing a decision to retract a published article, and the manuscript ID number if you are appealing a decision to reject an unpublished manuscript.

Predatory publishing

Predatory publishing, sometimes called write-only publishing [1][2] or deceptive publishing [3], is an exploitative academic publishing business model that involves charging publication fees to authors without checking articles for quality and legitimacy and without providing the editorial and publishing services that legitimate academic journals provide, whether open access or not.

Identifying a predator

Deciding if a publisher is predatory is often a matter of evaluating publisher practices against expectations. While not fool-proof, the 13 warning signs below are evidence based and serve as a good starting point.

Adapted from Shamseer et al. (2017). Potential predatory and legitimate biomedical journals: can
you tell the difference? A cross-sectional comparison. BMC Medicine. 15:28.
DOI: https://doi.org/10.1186/s12916-017-0785-9

1. The journal's scope of interest includes unrelated subjects alongside legitimate topics.
2. The website contains spelling and grammar errors.
3. Images or logos are distorted/fuzzy or misrepresented/unauthorized.
4. The website targets authors, not readers (i.e. the publisher prioritizes making money over product).
5. The Index Copernicus Value (a bogus impact metric) is promoted.
6. There is no clear description of how the manuscript is handled.
7. Manuscripts are submitted by email.
8. Rapid publication is promoted, and promised.
9. There is no article retraction policy.
10. There is no digital preservation plan for content.
11. The APC (article processing charge) is very low (e.g., <$150).
12. A journal that claims to be open access either retains copyright of published research or fails to mention copyright.
13. The contact email address is non-professional and not journal/publisher affiliated (e.g., @gmail.com or @yahoo.com).

When 'Jane' turned to alternative medicine, she had already exhausted radiotherapy, chemotherapy and other standard treatments for breast cancer. Her alternative-medicine practitioner shared an article about a therapy involving vitamin infusions. To her and her practitioner, it seemed to be authentic grounds for hope. But when Jane showed the article to her son-in-law (one of the authors of this Comment), he realized it came from a predatory journal – meaning its promise was doubtful and its validity unlikely to have been vetted.

Predatory journals are a global threat. They accept articles for publication – along with authors' fees – without performing promised quality checks for issues such as plagiarism or ethical approval. Naive readers are not the only victims. Many researchers have been duped into submitting to predatory journals, in which their work can be overlooked. One study that focused on 46,000 researchers based in Italy found that about 5% of them published in such outlets [1]. A separate analysis suggests predatory publishers collect millions of dollars in publication fees that are ultimately paid out by funders such as the US National Institutes of Health (NIH) [2].

One barrier to combating predatory publishing is, in our view, the lack of an agreed definition. By analogy, consider the historical criteria for deciding whether an abnormal bulge in the aorta, the largest artery in the body, could be deemed an aneurysm – a dangerous condition. One accepted definition was based on population norms, another on the size of the bulge relative to the aorta and a third on an absolute measure of aorta width. Prevalence varied fourfold depending on the definition used. This complicated efforts to assess risk and interventions, and created uncertainty about who should be offered a high-risk operation [3].

The definition

The consensus definition reached was: "Predatory journals and publishers are entities that prioritize self-interest at the expense of scholarship and are characterized by false or misleading information, deviation from best editorial and publication practices, a lack of transparency, and/or the use of aggressive and indiscriminate solicitation practices." Examples are given below:

• Academic Exchange Quarterly
• Academic Research Reviews
• Academy of Contemporary Research Journal (AOCRJ)
• ACME Intellects
• Acta de Gerencia Ciencia (CAGENA)
• Acta Advances in Agricultural Sciences (AAAS)
• Acta Kinesiologica

Open access publishing

Open access is a broad international movement that seeks to grant free and open online access to
academic information, such as publications and data. A publication is defined 'open access' when
there are no financial, legal or technical barriers to accessing it - that is to say when anyone can
read, download, copy, distribute, print, search for and search within the information, or use it
in education or in any other way within the legal agreements.

There are different ways of publishing open access:

• The golden route:

1) Full Open Access journals: publication via publisher platforms, in full open access journals. This route may involve a charge.

2) Hybrid Journals: publication via 'hybrid' journals. These journals are subscription journals that allow open access publication of individual articles on payment of an Article Processing Charge (APC).

• The green route: the full text of academic publications is deposited in a trusted repository, a publicly accessible database managed by a research organisation.

• The diamond route: publication via diamond journals/platforms that do not charge author-facing publication fees (APCs). Diamond open access journals are usually funded via library subsidy models, institutions or societies.

Open Access Initiatives

Open Access Initiatives in India: OA was initiated in the developed countries, and later many developing countries, including India, joined the effort. In the wake of the open access movement, some policy frameworks have already been established by member communities to foster inclusive, plural and development-oriented knowledge societies. A number of open access declarations/statements were made during the past decade, in which the world's leading research institutions agreed on open access mandates. The United Nations-backed World Summit on the Information Society (WSIS) strongly supported open access to information and knowledge. This confirms that a number of member countries of the United Nations will take appropriate strategic decisions to bring scholarly literature, produced from publicly funded research initiatives or by state-supported researchers, under the umbrella of open access. Some of the major open access statements or declarations made during the past decade are given below:

• ARIIC Open Access Statement (Australian Research Information Infrastructure Committee) [www.caul.edu.au/scholcomm/OpenAccessARIICstatement.doc]
• Berlin Declaration on Open Access to Knowledge in the Sciences and Humanities [http://oa.mpg.de/openaccess-berlin/berlindeclaration.html]
• Bethesda Statement on Open Access [www.earlham.edu/~peters/fos/bethesda.htm]
• Budapest Open Access Initiative Statement [www.soros.org/openaccess/]
• ERCIM Statement on Open Access (European Research Consortium for Informatics and Mathematics)

SHERPA/RoMEO

SHERPA/RoMEO is a service run by SHERPA to show the copyright and open access self-
archiving policies of academic journals.

The database used a colour-coding scheme to classify publishers according to their self-archiving
policy.[1] This shows authors whether the journal allows preprint or postprint archiving in their
copyright transfer agreements.[2] It currently holds records for over 22,000 journals.[3] The colour
codes were retired in 2020, with the launch of a new site.
Examples of complaints

• Paper published without permission or acknowledgement from the institution
• Simultaneous submission without aiming at duplicate publication
• Publication of data without permission
• Paper submitted for publication without consent or knowledge of co-authors
• Failure to ask permission

06: Database and Research Metrics

Indexing databases

The prestige of any journal is judged by how many abstracting and indexing services cover that journal. It has been observed in the last few years that authors have started searching for indexed journals in which to publish their articles. Probably this is happening because it has become a mandatory requirement for further promotion of teaching faculty in medical colleges and institutions. However, the big question is: what, after all, is an "indexed journal"? Is a journal considered indexed if it is documented in a local database, a regional database, or any continental database? Based on the available literature, we would like to clarify in the following paragraphs the history of indexing, what actual indexing is, and what non-indexing is.

A citation index is an ordered list of cited articles, each accompanied by a list of citing articles. The citing article is identified as the source and the cited article as the reference. An abstracting and indexing service is a product a publisher sells or makes available. The journal contents are searchable using subject headings (keywords, authors' names, title, abstract, etc.) in the available database [2]. Being represented in the relevant online abstracting and indexing services is an essential factor for the success of a journal. Today searching is done online, so it is imperative that a journal is represented in the relevant online search systems. A citation index is a kind of bibliographic database, an index of citations between publications, allowing the user to easily establish which later documents cite which earlier documents.
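
To make the idea of a citation index concrete, the sketch below models one as a simple mapping from each cited article to the set of articles that cite it. This is only an illustrative toy in Python; the article identifiers and the helper function names are invented for the example and are not part of any real indexing service.

    from collections import defaultdict

    # Toy citation index: each entry maps "cited article -> set of citing articles".
    # Article identifiers here are hypothetical placeholders.
    citation_index = defaultdict(set)

    def record_citation(citing: str, cited: str) -> None:
        """Register that `citing` cites `cited`."""
        citation_index[cited].add(citing)

    def citing_articles(cited: str) -> set:
        """Return all later documents known to cite `cited`."""
        return citation_index.get(cited, set())

    # Example: build a tiny index and query it.
    record_citation("Paper-B-2021", "Paper-A-2018")
    record_citation("Paper-C-2022", "Paper-A-2018")
    record_citation("Paper-C-2022", "Paper-B-2021")

    print(citing_articles("Paper-A-2018"))   # {'Paper-B-2021', 'Paper-C-2022'}

Real services such as Web of Science or Scopus of course store far richer records (fields for author, title, abstract, affiliation, and so on), but the underlying lookup from a cited document to its citing documents is the same idea.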

Web of Science

Clarivate Analytics' Web of Science is an online subscription-based citation indexing service which gives access to multiple databases that reference cross-disciplinary research and which allows for comprehensive citation search and in-depth exploration of specialized sub-fields within a scientific discipline. It consists of 6 core databases, a number of specialist collections, as well as regional databases, and currently contains more than 160 million records and over 1.7 billion cited references. A select number of Atlantis Press journals and proceedings is indexed in Web of Science databases such as the Science Citation Index Expanded (SCIE), the Emerging Sources Citation Index (ESCI) and the Conference Proceedings Citation Index (CPCI).

Scopus

Elsevier's Scopus is the world's largest abstract and citation database of peer-reviewed scientific
journals, books and conference proceedings which covers research topics across all scientific,
technical and medical disciplines. The database currently contains more than 75 million records
and over 1.4 billion cited references, while it also offers various smart tools and metrics to track,
analyze and visualize research. At present a select number of Atlantis Press journals and
proceedings is indexed in Scopus and a number of applications are in progress.
MEDLINE

MEDLINE is the U.S. National Library of Medicine's premier bibliographic database and contains more than 25 million references to journal articles in the life sciences, with a concentration on biomedicine. The subject scope of MEDLINE is biomedicine and health, broadly defined to
encompass those areas of the life sciences, behavioral sciences, chemical sciences and
bioengineering needed by health professionals and others engaged in basic research and clinical
care, public health, health policy development or related educational activities. MEDLINE also
covers life sciences vital to biomedical practitioners, researchers and educators, including aspects
of biology, environmental science, marine biology, plant and animal science as well as
biophysics and chemistry. A select number of Atlantis Press journals in health and medical
sciences is indexed in MEDLINE.

PubMed Central (PMC)

PubMed Central (PMC) is a free full-text digital archive of biomedical and life sciences journal
literature which has been developed and operated by the National Center for Biotechnology
Information (NCBI), a division of the U.S. National Library of Medicine (NLM) at the U.S.
National Institutes of Health (NIH). As of today, PMC contains more than 5.9 million full-text
articles spanning several centuries of biomedical and life science research (late 1700s to present).
Participation by publishers in PMC is voluntary, although participating journals must meet
certain scientific and technical standards and content must be deposited as per the NIH Public
Access Policy. A select number of Atlantis Press journals in health and medical sciences is
indexed in PMC.

Directory of Open Access Journals (DOAJ)

The Directory of Open Access Journals (DOAJ) is a community-curated online directory of open
access journals which aims to be the starting point of all information searches for quality, peer-
reviewed, open access material. DOAJ's mission is to increase the visibility, accessibility,
reputation, usage and impact of quality, peer-reviewed, open access scholarly research journals
globally, regardless of discipline, geography or language. At present, the directory contains more
than 14,000 open access journals from 133 countries and more than 4.6 million open access
articles covering all areas of science, technology, medicine, social science and humanities. All
Atlantis Press journals are indexed in DOAJ.

Citation databases

Citation databases are collections of referenced papers/articles/books and other material entered into an online system (database) in a structured and consistent way. All the information relating to a single document (author, title, publication details, abstract, and perhaps the full text) makes up the 'record' for that document. Each of these items of information becomes a separate 'field' in that record and enables the document to be retrieved via any of these items, or by keywords.

Why use a citation database?

A citation database allows you to access published, peer-reviewed, high-quality material such as
journal articles, research reports, systematic reviews, conference proceedings, editorials, and
related works. When a document is originally entered into a database it is analysed for its key
subjects, and descriptors (MeSH terms in MEDLINE, PubMed etc.) are assigned to it. MeSH
terms are Medical Subject Headings, which is a controlled vocabulary thesaurus used for
indexing and cataloguing articles for medical and biomedical purposes. These MeSH terms allow
precise searching as the databases search for these specific terms in a hierarchical order.

Searches can then be limited, for example, by author or title fields, or year(s) of publication, and keywords can be focused and searched separately. Searches undertaken in citation databases are therefore more precise and comprehensive than searches on general internet search engines, and the results are of consistently higher quality and reliability.
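
As an illustration of what such field-limited searching looks like in practice, the short Python snippet below simply prints two PubMed-style query strings built from MeSH and other field tags. The specific terms, author name and date range are hypothetical examples chosen for this sketch; the exact tag syntax supported by any given database should be checked in that database's own documentation.

    # Illustrative PubMed-style query strings; the search terms are hypothetical.
    queries = [
        '"breast neoplasms"[MeSH Terms] AND therapy[Title/Abstract]',
        'smith j[Author] AND 2019:2021[dp]',   # [dp] limits by date of publication
    ]
    for query in queries:
        print(query)

The point of the example is only that each bracketed tag restricts the search to one field of the record, which is what makes citation-database searches more precise than free-text web searches.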

Purpose

You can use a citation database to:

• distinguish between authors with the same name, or an author's name that has been presented in different ways
• analyse search results to show the number of documents broken down by various criteria, including year, author, source, affiliation, or subject categories
• search within results by adding additional terms to the initial search
• identify highly cited works related to a particular topic
• find related works that share references or authors
• create search alerts to keep up to date with developments in your discipline
• set up citation alerts to notify you when a document or author is cited elsewhere
• set up alerts to notify you about new documents by an author
• generate a profile that presents an analysis and citation summary of works published by an institution or author(s), including h-index
• compare the performance of journals in a particular subject area.

Search a database

The Scopus and Web of Science databases share a number of similar features, but differ in the
sources cited and coverage. Both databases focus on English language publications.

Scopus

Coverage

The Scopus database contains records from 1969 including science, mathematics, engineering,
technology, health and medicine, social sciences, arts and humanities.
 22,800 peer reviewed journals, including 3,800 open access titles.
 280 trade publications.
 Articles in press [accepted for publication] from more than 8,000 publishers.
 150,000 books from Science, Technology & Medicine (2005-present) and Arts &
Humanities (2003-present).
 8 million conference papers from 100,000 conferences.
 39 million patents.

Web of Science (WoS)

Coverage

The WoS database contains records from 1900 including sciences, social sciences, arts and
humanities.

 20,300 peer reviewed journals.
 94,000 scholarly books (2005-present).
 10 million conference papers.
 Coverage in some fields is less complete than in others, with an apparent focus on the
sciences.

What are research metrics?

Research metrics are the fundamental tools used across the publishing industry to measure
performance, both at journal- and author-level.

Research metrics are measures used to quantify the influence or impact of scholarly work.
Some examples of this are bibliometrics (methods to analyze and track scholarly
literature), citation analysis, and altmetrics (a more recent set of alternative methods
that attempt to track and analyze scholarship through various digital media.)

For a long time, the only tool for assessing journal performance was the Impact Factor – more on
that in a moment. Now there is a range of different research metrics available. This "basket of
metrics" is growing every day, from the traditional Impact Factor to altmetrics, the h-index, and
beyond.

Citation-based metrics

Impact Factor

The Impact Factor is probably the most well-known metric for assessing journal performance.
Designed to help librarians with collection management in the 1960s, it has since become a
common proxy for journal quality.
The Impact Factor is a simple research metric: it's the average number of citations received by
articles in a journal within a two-year window.

The Web of Science Journal Citation Reports (JCR) publishes the official results annually, based
on this calculation:

Number of citations received in one year to content published in Journal X during the two
previous years, divided by the total number of articles and reviews published in Journal X within
the previous two years.

For example, the 2017 Impact Factors (released in 2018) used the following calculation:

Number of citations received in 2017 to content published in Journal X during 2015 and
2016, divided by the total number of articles and reviews published in Journal X in 2015 and
2016.
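
To make this arithmetic concrete, here is a minimal sketch in Python. The journal and its citation
and article counts are invented purely for illustration; official Impact Factors are taken from the
JCR, not computed by hand.

def impact_factor(citations_this_year, citable_items_prev_two_years):
    """Impact Factor = citations received in one year to content published in
    the two previous years, divided by the number of citable items (articles
    and reviews) published in those two years."""
    return citations_this_year / citable_items_prev_two_years

# Hypothetical journal: 150 citations received in 2017 to content published
# in 2015-2016, which together contained 60 articles and reviews.
print(round(impact_factor(150, 60), 2))  # 2.5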

Eigenfactor

The Eigenfactor measures the influence of a journal based on how often it is cited within other
reputable journals over five years. A citation from a highly cited journal is worth more than one
from a journal with few citations.

To adjust for subject areas, the citations are also weighted by the length of the reference list that
they‘re from. The Eigenfactor is calculated using an algorithm to rank the influence of journals
according to the citations they receive. A five-year window is used, and journal self-citations are
not included.

This score doesn't take journal size into account. That means larger journals tend to have larger
Eigenfactors as they receive more citations overall. Eigenfactors also tend to be very small
numbers, as scores are scaled so that the sum of all journal Eigenfactors in the JCR adds up to
100.

Very roughly, the Eigenfactor calculation is:

Number of citations in one year to content published in Journal X in the previous five years
(weighted), divided by the total number of articles published in Journal X within the previous
five years.
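
The official Eigenfactor algorithm is not a simple ratio: it is an iterative, network-based
calculation (similar in spirit to PageRank) over the whole journal citation graph. The toy sketch
below, with an invented three-journal citation matrix and self-citations removed, only illustrates
the key idea that a citation from an influential journal counts for more; it is not the actual
Eigenfactor implementation.

import numpy as np

# Toy citation matrix: C[i, j] = citations from journal j to journal i over a
# five-year window. The diagonal is zero because journal self-citations are excluded.
C = np.array([
    [0.0, 30.0, 10.0],
    [20.0, 0.0, 5.0],
    [5.0, 10.0, 0.0],
])

# Column-normalise so that each citing journal distributes one unit of influence
# across the journals it cites (weighting by reference-list share).
P = C / C.sum(axis=0)

# Power iteration: a journal's influence is the influence of the journals citing
# it, weighted by the share of their references that point to it.
influence = np.ones(3) / 3
for _ in range(100):
    influence = P @ influence
    influence = influence / influence.sum()

print(influence)  # relative influence scores that sum to 1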

What is CiteScore?
CiteScore is the ratio of citations to research published. It's currently available for journals and
book series which are indexed in Scopus. CiteScore considers all content published in a journal,
not just articles and reviews.

CiteScore was launched by Scopus in December 2016, and you can easily replicate it via the
Scopus database. In addition to CiteScore, Scopus also publishes additional rankings, such as the
CiteScore percentile based on subject categories, and a monthly CiteScore tracker.

The CiteScore calculation is:

Number of all citations recorded in Scopus in one year to content published in Journal X in the
last three years, divided by the total number of items published in Journal X in the previous three
years.
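
A minimal sketch of the same ratio in Python, again with invented numbers; official CiteScore
values come directly from Scopus.

def citescore(citations_this_year, items_prev_three_years):
    """CiteScore = citations recorded in Scopus in one year to content from the
    previous three years, divided by ALL items published in those three years
    (articles, reviews, editorials, letters, and other front matter)."""
    return citations_this_year / items_prev_three_years

# Hypothetical journal: 450 citations in one year to the 300 items it published
# over the previous three years.
print(round(citescore(450, 300), 2))  # 1.5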

Journals that publish a large amount of front matter (such as editorials or peer commentaries)
will perform worse by CiteScore than by Impact Factor because this front matter is rarely cited.

What are the differences between CiteScore and Impact Factor?

1. CiteScore is based on the Scopus database rather than Web of Science. This means the
number of citations and journal coverage in certain subject areas is notably higher.

2. CiteScore uses a three-year citation window, whereas Impact Factor uses a two-year
citation window.

3. The CiteScore denominator includes all content published in the journal. The Impact
Factor denominator includes only articles and reviews.

4. CiteScore covers all subject areas, whereas the Impact Factor is only available for
journals indexed in the SCIE and SSCI.

CiteScore suffers from some of the same problems as the Impact Factor; namely, it isn't
comparable across disciplines and it is a mean calculated from a skewed distribution.

SNIP - Source Normalized Impact per Paper

SNIP is a journal-level metric which attempts to correct for subject-specific characteristics,
simplifying cross-discipline comparisons between journals. It measures citations received against
citations expected for the subject field, using Scopus data. SNIP is published twice a year and
looks at a three-year period.

The SNIP calculation is:

Journal citation count per paper, divided by citation potential in the field.
SNIP normalizes its sources to allow for cross-disciplinary comparison. In practice, this means
that a citation from a publication with a long reference list has a lower value.

SNIP only considers citations to specific content types (articles, reviews, and conference papers),
and does not count citations from publications that Scopus classifies as "non-citing sources".
These include trade journals and many Arts & Humanities titles.
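
The official SNIP formula uses a "citation potential" derived from Scopus data; the sketch below
only illustrates the normalization idea described above, and both the citation rates and the
citation-potential values are invented for illustration.

def snip_like_score(citations_per_paper, field_citation_potential):
    """SNIP-style ratio: raw citations per paper divided by the citation
    potential of the journal's field. Fields with long reference lists have a
    higher citation potential, so the same raw citation rate yields a lower
    normalized score there."""
    return citations_per_paper / field_citation_potential

# Invented example: a life-sciences journal with 4 citations per paper in a
# field whose citation potential is 4.0, versus a humanities journal with 2
# citations per paper in a field whose citation potential is 1.0.
print(snip_like_score(4, 4.0))  # 1.0
print(snip_like_score(2, 1.0))  # 2.0 -- fewer raw citations, higher normalized score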

SJR - Scimago Journal Rank

The SJR aims to capture the effect of subject field, quality, and reputation of a journal on
citations. It calculates the prestige of a journal by considering the value of the sources that cite it,
rather than counting all citations equally.

Each citation received by a journal is assigned a weight based on the SJR of the citing journal.
So, a citation from a journal with a high SJR value is worth more than a citation from a journal
with a low SJR value.

The SJR calculation is:

Average number of (weighted) citations in a given year to Journal X, divided by the number of
articles published in Journal X in the previous three years.

As with SNIP and CiteScore, SJR is calculated using Scopus data.

IPP - Impact Per Publication: Also known as RIP (raw impact per publication), the IPP is used
to calculate SNIP. IPP is the number of current-year citations to papers from the previous three
years, divided by the total number of papers published in those three years.

h-index

What is the h-index?

The h-index is an author-level research metric, first introduced by Hirsch in 2005. The h-index
attempts to measure the productivity of a researcher and the citation impact of their publications.

The basic h-index calculation is:

The largest number h such that the author has published h articles which have each received at
least h citations.

For example, if 10 of your papers have each been cited 10 times or more, but you do not have 11
papers with at least 11 citations each, you will have an h-index of 10.
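
A minimal sketch of this calculation in Python, using an invented list of per-paper citation counts:

def h_index(citation_counts):
    """Largest h such that at least h papers have been cited at least h times each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical researcher with seven papers:
print(h_index([25, 18, 10, 6, 5, 2, 1]))  # 5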

What are the advantages of the h-index?

 Results aren’t skewed


The main advantage of the h-index is that it isn't skewed upwards by a small number of highly-
cited papers. It also isn't skewed downwards by a long tail of poorly-cited work.

The h-index rewards researchers whose work is consistently well cited. That said, a handful of
well-placed citations can have a major effect.

What are the disadvantages of the h-index?

 Results can be inconsistent

Although the basic calculation of the h-index is clearly defined, it can still be calculated using
different databases or time-frames, giving different results. Normally, the larger the database, the
higher the h-index calculated from it. Therefore, an h-index taken from Google Scholar will nearly
always be higher than one from Web of Science, Scopus, or PubMed. (It's worth noting here that,
as Google Scholar is an uncurated dataset, it may contain duplicate records of the same article.)

 Results can be skewed by self-citations

Although some self-citation is legitimate, authors can cite their own work to improve their h-
index.

 Results aren’t comparable across disciplines

The h-index varies widely by subject, so a mediocre h-index in the life sciences will still be
higher than a very good h-index in the social sciences. We can‘t benchmark h-indices because
they are rarely calculated consistently for large populations of researchers using the same
method.

 Results can’t be compared between researchers

The h-index of a researcher with a long publication history including review articles cannot be
fairly compared with a post-doctoral researcher in the same field, nor with a senior researcher
from another field. Researchers who have published several review articles will normally have
much higher citation counts than other researchers.

Altmetrics

What are altmetrics?

Alternative metrics (or "altmetrics") help you to measure the impact of a journal by looking at
the social activity around it. They use quantitative and qualitative data alongside traditional
citation- and usage-based metrics to provide an insight into the attention, influence and impact of
academic research.
The most common method of reporting on altmetrics is the Altmetric Attention Score. This tool
tracks a wide range of online sources to capture the conversations happening around academic
research.

How is the Altmetric Attention Score calculated?

Altmetric monitors each online mention of a piece of research and weights the mentions based
on volume, sources, and authors. A mention in an international newspaper contributes to a higher
score than a tweet about the research, for example.

The Altmetric Attention Score is presented within a colorful donut. Each color indicates a
different source of online attention (ranging from traditional media outlets to social media, blogs,
online reference managers, academic forums, patents, policy documents, the Open Syllabus
Project, and more). A strong Altmetric Score will feature both a high number in the center, and a
wide range of colors in the donut.


What are the advantages of the Altmetric Attention Score?

 Receive instant, trackable feedback

Altmetric starts tracking online mentions of academic research from the moment it‘s published.
That means there‘s no need to wait for citations to come in to get feedback on a piece of
research.

 Get a holistic view of attention, impact and influence

The data Altmetric gathers provides a more all-encompassing, nuanced view of the attention,
impact, and influence of a piece of research than traditional citation-based metrics. Digging
deeper into the Altmetric Attention Score can reveal not only the nature and volume of online
mentions, but also who's talking about the research, where in the world these conversations are
happening, and which online platforms they're using.

05 Software tools

Turnitin

When you open the report, the whole document is shown in the web browser. The sources with
which similarities were found are listed on the right side of the screen. Turnitin does not detect
tricks that may have been used to mislead plagiarism scanners.

Turnitin does not show 'detailed' text comparisons: the application only shows the text that was
submitted by the student, and highlights the parts of that text that were found to be a match.

Turnitin lists all of the sources with which a match was found on the right, next to the submitted
document. In the 'all sources' view, a source may be selected and excluded by pressing the
'Exclude source' button. The source is then no longer treated as a similarity and no longer adds
to the similarity percentage.

Urkund

When the report is opened, the analysis overview page is shown. To view the whole document,
you can click the 'entire document' tab.

Sources with which similarities have been found can be seen by clicking on matching text in the
findings. Each match is shown on a separate page. At the bottom of the screen, you will see a
'previous highlight' and a 'next highlight' button, which bring you to the page for the previous or
next similarity.

In addition to showing matching text, Urkund can also display warnings when the text seems
suspicious, since students will sometimes try to use tricks to mislead plagiarism scanners.

Urkund shows 'detailed' differences in the text. Text from the submitted document is shown in
orange on the left, and the matching source text is shown on the right. The colours pink and
yellow are used to indicate the details of the differences.

Best practice/standard setting initiatives and guidelines: COPE, WAME

The Committee on Publication Ethics (COPE), the Directory of Open Access Journals (DOAJ),
the Open Access Scholarly Publishers Association (OASPA), and the World Association of
Medical Editors (WAME) are scholarly organizations that have seen an increase in the number,
and broad range in the quality, of membership applications. These organizations have collaborated
to identify principles of transparency and best practice for scholarly publications and to clarify
that these principles form the basis of the criteria by which suitability for membership is assessed
by COPE, DOAJ and OASPA, and part of the criteria on which membership applications are
evaluated by WAME.

Each organization also has its own additional criteria which are used when evaluating
applications. The organizations will not share lists of publishers or journals that failed to
demonstrate that they met the criteria for transparency and best practice.

This is the third version of a work in progress (published January 2018); the first version was
made available in December 2013 and a second version in June 2015. We encourage its wide
dissemination and continue to welcome feedback on the general principles and the specific
criteria. Background on the organizations is below.

Principles of Transparency

1. Website: A journal‘s website, including the text that it contains, shall demonstrate that care
has been taken to ensure high ethical and professional standards. It must not contain information
that might mislead readers or authors, including any attempt to mimic another journal/publisher‘s
site.

2. Name of journal: The Journal name shall be unique and not be one that is easily confused
with another journal or that might mislead potential authors and readers about the Journal‘s
origin or association with other journals.

3. Peer review process: Journal content must be clearly marked as to whether or not it is peer
reviewed. Peer review is defined as obtaining advice on individual manuscripts from reviewers
expert in the field who are not part of the journal's editorial staff.

4. Ownership and management: Information about the ownership and/or management of a
journal shall be clearly indicated on the journal's website. Publishers shall not use organizational
or journal names that would mislead potential authors and editors about the nature of the
journal's owner.

5. Governing body: Journals shall have editorial boards or other governing bodies whose
members are recognized experts in the subject areas included within the journal's scope.

6. Editorial team/contact information: Journals shall provide the full names and affiliations of the
journal's editors on the journal website as well as contact information for the editorial office,
including a full address.

7. Copyright and Licensing: The policy for copyright shall be clearly stated in the author
guidelines and the copyright holder named on all published articles. Likewise, licensing
information shall be clearly described in guidelines on the website, and licensing terms shall be
indicated on all published articles, both HTML and PDFs.

8. Author fees: Any fees or charges that are required for manuscript processing and/or
publishing materials in the journal shall be clearly stated in a place that is easy for potential
authors to find prior to submitting their manuscripts for review or explained to authors before
they begin preparing their manuscript for submission. If no such fees are charged, that should
also be clearly stated.

Metric: Impact Factor
Website: Journal Citation Reports
Calculation: Use a two-year period: divide the number of times articles were cited by the number
of articles that were published. Example (2020 Impact Factor): 200 = the number of times articles
published in 2018 and 2019 were cited by indexed journals during 2020; 73 = the total number of
"citable items" published in 2018 and 2019; 200/73 = 2.73.
Meaning: The Impact Factor reflects only how many citations there are to a specific journal (on
average). A journal with a high Impact Factor has articles that are cited often.

Metric: h-index
Website: Web of Science, Google Scholar, Scopus
Calculation: 1) Create a list of all of your publications and organize the articles in descending
order, based on the number of times they have been cited. 2) Look down through the list to find
the point at which the number of times a publication has been cited is equal to or larger than the
line (or paper) number of that publication. (Many databases will give you this number; you only
need this method if you would like to calculate it manually, and calculators can also often be
found online. Adapted from the University of Waterloo LibGuide.)
Meaning: The h-index focuses on the impact of a single scholar rather than an entire journal. The
higher the h-index, the more highly cited scholarly output a researcher has.

Metric: g-index
Website: Harzing's Publish or Perish
Calculation: Given a list of articles ranked in decreasing order of the number of citations they
received, the g-index is the largest number g such that the top g articles together received at least
g^2 citations.
Meaning: The g-index can be thought of as a continuation of the h-index. The difference is that
this index puts more weight on highly cited articles. The g-index was created because scholars
noticed that the h-index ignores the number of citations to each individual article beyond what is
needed to achieve a certain h-index. This number often complements the h-index and isn't
necessarily a replacement.

Metric: Eigenfactor score
Website: Eigenfactor.org
Calculation: The Eigenfactor score is calculated by eigenfactor.org. The process is very similar to
calculating the Impact Factor, and the data are also pulled from the JCR. The major difference is
that the Eigenfactor score excludes references from one article in a journal to another article in
the same journal, which eliminates the problem of self-citing. The Eigenfactor score is also a
five-year calculation. More information can be found through Journal Citation Reports.
Meaning: A high Eigenfactor score signals that the journal does not rely on self-citation and is
influential within the citation network of its discipline. It is useful to look at a scholar's h-index
as well as the Eigenfactor score of the journals they publish in, in order to get a broad sense of
their impact as a researcher.

Metric: Altmetric score
Website: Altmetric.com
Calculation: Altmetric scores are usually calculated by companies, which means that they cannot
be calculated manually. For an explanation of how this metric is calculated, you can visit the
Altmetric support page.
Meaning: Different sources go into altmetrics calculations, depending on the company and the
information it uses. In general, a high altmetric score indicates that an item has received a lot of
attention, and that it has received what the company has decided is "quality" attention (e.g. a
news post might be more valuable than a Twitter mention). Remember that attention does not
necessarily indicate that the article is important.
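
Because the g-index definition above is easy to misread, here is a minimal Python sketch of the
calculation, using invented citation counts; for the same list of papers, the h-index would be 5.

def g_index(citation_counts):
    """Largest g such that the g most-cited papers together have at least g^2 citations."""
    counts = sorted(citation_counts, reverse=True)
    running_total = 0
    g = 0
    for rank, cites in enumerate(counts, start=1):
        running_total += cites
        if running_total >= rank ** 2:
            g = rank
    return g

# Hypothetical researcher with seven papers (same list as the h-index example above):
print(g_index([25, 18, 10, 6, 5, 2, 1]))  # 7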

Tools for Identifying Journals

Some databases and publishers have already done the work for you by compiling lists and
databases specifically designed to examine impact:

• Elsevier Journal Finder

The Elsevier Journal Finder helps you find journals that are best suited for publishing your
scientific article.

• Springer Journal Selector

The Springer Journal Selector helps you find journals suitable for publishing your research in
such fields as: philosophy, law, engineering, food science and nutrition, and more.

• Harzing Publish or Perish Journal Quality List

The Journal Quality List is a collation of journal rankings from a variety of sources. It is
published to assist academics to target papers at journals of an appropriate standard.

Sherpa Romeo

Sherpa Romeo is an online resource that aggregates and presents publisher and journal open
access policies from around the world. Every registered publisher or journal held in Romeo is
carefully reviewed and analysed by a specialist team, which provides summaries of self-archiving
permissions and of the conditions of rights given to authors on a journal-by-journal basis where
possible.

There are three versions of the manuscript considered in SHERPA/RoMEO:

1. pre-print: the manuscript version before peer review;

2. post-print: the manuscript as accepted after peer review but not yet typeset as an article in the
journal; and

3. typeset/publisher's manuscript: the peer-reviewed manuscript, fully typeset, as it appears in
the journal.
RoMEO colour and archiving policy:

Green: can archive pre-print and post-print or publisher's version/PDF
Blue: can archive post-print (i.e. final draft post-refereeing) or publisher's version/PDF
Yellow: can archive pre-print (i.e. pre-refereeing)
White: archiving not formally supported

Some journals accept archiving only of the pre-print, while others accept both pre-print and
post-print, or even the archiving of all three versions. SHERPA/RoMEO's API lets you look up a
journal's policy by its name or its ISSN, and find out whether restrictions apply, such as embargo
periods before the different manuscript versions can be publicly archived.

However, even though the database is still updated, development of the API seems to have
stopped in 2013, which means it lacks some functionality and does not always follow modern
web standards. Because of this, existing R packages could not always be used directly and small
adjustments were often needed first. For example, the API did not always return valid XML, and
the character encoding was declared not in the HTTP headers but in the body of the document.
Furthermore, the SHERPA/RoMEO API is not RESTful, so queries were a little more complex to
design. Fortunately, the developers had written full documentation of all the different types of
query that could be run.
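
As a rough illustration of what querying the legacy XML API looked like, here is a hedged
Python sketch. The endpoint name (api29.php), the jtitle parameter, and the XML element names
are assumptions based on the old, pre-2020 API documentation and may no longer work; the
current Sherpa Romeo service exposes a different API.

import requests
import xml.etree.ElementTree as ET

# Assumed legacy endpoint and query parameter; verify against the current
# SHERPA/RoMEO documentation before relying on them.
response = requests.get(
    "http://www.sherpa.ac.uk/romeo/api29.php",
    params={"jtitle": "Journal of Ecology"},  # hypothetical example journal title
    timeout=30,
)

# The character encoding is declared in the XML prolog rather than in the HTTP
# headers, so parse the raw bytes and let the XML parser pick up the encoding.
root = ET.fromstring(response.content)

# Element names ('journal', 'jtitle', 'romeocolour') are assumptions about the
# legacy XML schema.
for journal in root.iter("journal"):
    print(journal.findtext("jtitle"))
for colour in root.iter("romeocolour"):
    print("RoMEO colour:", colour.text)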

The rromeo package lets you access basic information regarding these journal policies from R,
for example retrieving the archiving policy of a specific journal from its title.
