
Q. What is the subject-wise percentage of copied work allowed as per the norms of
plagiarism?

A. Plagiarism is defined in Section 2(k) of the UGC regulations framed under the UGC Act, 1956,
as "…an act of academic dishonesty and a breach of ethics. It involves using
someone else's work as one's own. It also includes data plagiarism and self-plagiarism."
Plagiarism is one of the biggest issues that educational institutions and academia are facing, not
only due to the fact that it is basically stealing someone else’s work, but also because this
practice is so widespread in higher education, be it Ph.D. scholars or the average undergraduate.
Plagiarism is a stain upon the legitimacy of education in India, thus the UGC has recently drafted
certain regulations to control and punish plagiarism. There is a lack of consensus or clear-cut
rules on what percentage of plagiarism is acceptable in a manuscript. By convention, a text
similarity below 15% is usually acceptable to journals, and a similarity above 25% is
considered a high percentage of plagiarism.
But even in the case of 15% similarity, if the matching text is one continuous block of borrowed
material, it will be considered plagiarized text of significant concern. On the other hand, text
similarity due to the use of common terminologies and method-related details in the
'Methodology' section of a manuscript should not raise a serious ethical concern.
Different universities, schools, and colleges have different requirements for the percentage of
plagiarism in assignments. Some of them allow students to have not more than 15% of similarity.
Some prohibit any plagiarism at all. In practice, teachers can detect real plagiarism only if they
check the similarity report AND review the results found. Some matches may consist of terminology,
numbers, common names, etc.; these cannot be considered plagiarism, even though the
plagiarism-checking tool flags them in the report.

Q. Name the Software used for Plagiarism detection.

A. Turnitin Plagiarism Checker: Turnitin has the largest collection of academic and
other internet content to check against for even the slightest possibility of academic
misconduct and content duplicity. It is widely used by academic professionals to ensure
content originality and educational excellence. Further, its feedback and grading
features empower students to take up any new content task with confidence.
Features of the software:
• Instant analysis of content originality with citations and sources
• Duplicity results presented in a simple format
• Also helps with grammar and spelling checks
• Provides suggestions for better word usage
• Write Check by Turnitin also provides the similarity score
Q. Explain the terms COPE, WAME with reference to best practices.

A. COPE (the Committee on Publication Ethics) and WAME (the World Association of Medical
Editors) are scholarly organizations that have seen an increase in the number, and a broad range
in the quality, of membership applications. These organizations have collaborated to identify
principles of transparency and best practice for scholarly publications and to clarify that these
principles form the basis of the criteria by which suitability for membership is assessed by
COPE, DOAJ and OASPA, and part of the criteria on which membership applications are evaluated by
WAME.

COPE: COPE provides advice to editors and publishers on all aspects of publication ethics and,
in particular, how to handle cases of research and publication misconduct. It also provides a
forum for its members to discuss individual cases. COPE does not investigate individual cases
but encourages editors to ensure that cases are investigated by the appropriate authorities
(usually a research institution or employer). All COPE members are expected to apply COPE
principles of publication ethics outlined in the core practices.

WAME: The World Association of Medical Editors is a global nonprofit voluntary association of
editors of peer-reviewed medical journals who seek to foster cooperation and communication
among editors; improve editorial standards; promote professionalism in medical editing through
education, self-criticism, and self-regulation; and encourage research on the principles and
practice of medical editing. WAME develops policies and recommendations of best practices for
medical journal editors and has a syllabus for editors that members are encouraged to follow.

Q. Write in brief about indexing databases.

A. A database index is a physical access structure for a database table that tells the database
where records are physically stored on disk. Consider a textbook to understand the
concept of an index. In order to find a particular word, the reader can either read through the
book until he finds what he is seeking or, alternatively, search in the index and go
directly to the desired word. A database index functions similarly to a textbook index. Adding
appropriate indexes to large database tables is the most important aspect of database
optimization. A database index is a data structure that improves the speed of data retrieval
operations on a database table at the cost of slower writes and increased storage space. Creation
of indexes involves one or more columns of a database table, which provides random lookups
and efficient access of ordered records. The database index requires comparatively less storage
space than the original table. Database indexing can be broadly classified into the following
two categories:
1) Non-Clustered Indexing
2) Clustered Indexing
Non-Clustered Indexing
The data is present in random order in a database table, but the logical ordering is specified by
the non-clustered index. The data rows may be randomly spread throughout the database table.
The non-clustered index tree contains the index keys in sorted order, with the leaf level of the
index containing a pointer to the page and the row number in the data page. In a non-clustered
index, the physical order of the rows in the database table is not the same as the index order.
Such indexes are typically created on columns used in JOIN, WHERE, and ORDER BY clauses, and are
good for tables whose values are modified frequently. Microsoft SQL Server creates non-clustered
indexes by default when the CREATE INDEX command is given. There can be more than one
non-clustered index on a database table: up to 249 non-clustered indexes per table in SQL Server
2005 and 999 per table in SQL Server 2008. SQL Server also creates a clustered index on a primary
key by default.
Clustered Indexing
Clustering alters the database into a certain distinct order to match the index, resulting
in the row data being stored in order. Therefore, only one clustered index can be created on a
given database table. Clustered indexes greatly increase the overall speed of retrieval, but usually
only where the data is accessed sequentially in the same or reverse order of the clustered index,
or when a range of items is selected. Since the physical records are in sorted order on disk, the
next row item in the sequence is immediately before or after the last one, and so fewer data block
reads are required. The primary feature of a clustered index is the ordering of the physical data
rows in accordance with the index blocks that point to them. Some databases separate the data
and index blocks into separate files; others put them into completely different data blocks within
the same physical file. A clustered index creates an object where the physical order of the rows
is the same as the index order and the bottom level of the clustered index contains the actual
data rows; such tables are known as index-organized tables in Oracle.
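As a minimal sketch of the non-clustered indexing idea, the following uses Python's built-in sqlite3 module with a hypothetical `students` table (the table, column names, and data are illustrative assumptions, not from the text); the query plan confirms the index is used for a WHERE lookup:

```python
import sqlite3

# In-memory database with a hypothetical table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT, marks INTEGER)")
conn.executemany("INSERT INTO students (name, marks) VALUES (?, ?)",
                 [("A", 70), ("B", 85), ("C", 90)])

# Create a (non-clustered) index on a column used in WHERE clauses.
conn.execute("CREATE INDEX idx_students_marks ON students (marks)")

# The query planner now reports that the index is used for this lookup.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT name FROM students WHERE marks > 80").fetchall()
print(plan)  # the plan detail mentions idx_students_marks
```

This mirrors the text above: the table rows stay in their insertion order, while the separate index structure keeps the `marks` keys sorted with pointers back to the rows.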
Indexes in Oracle
In Oracle, the performance of database queries can be improved by introducing various types of
indexes. The following types of indexes are supported by Oracle:
• B-tree indexes: the default and the most common
• B-tree cluster indexes: defined specifically for a cluster
• Hash cluster indexes: defined specifically for a hash cluster
• Global and local indexes: relate to partitioned tables and indexes
• Reverse key indexes: most useful for Oracle Real Application Clusters applications
• Bitmap indexes: compact; work best for columns with a small set of values

Q. Explain the meaning of term ‘Impact Factor of Journal’ as per citation report, SNIP, SJR,
IPP and Cite Score.
A. Impact Factor: The impact factor is a measure of the frequency with which the "average
article" in a journal has been cited in a particular year or period. The annual JCR impact
factor is a ratio between citations and recent citable items published. The impact factor (IF) of a
scientific journal is a measure reflecting the average number of citations to papers published in
that journal. This indicator measures the relative importance of a journal within its scientific
field, with journals with higher impact factors deemed to be more important than those with
lower ones. In a given year, the IF of a journal is the average number of citations received per
article published in that journal during the 2 preceding years. IFs are calculated each year by
Thomson Scientific for those journals that it indexes, and are published in Journal Citation
Reports. An impact factor can be calculated only after a journal completes a minimum of 3 years
of publication; for that reason an IF cannot be calculated for new journals. The journal with the highest IF
is the one that published the most commonly cited articles over a 2-year period. For example, if a
journal has an IF of 3 in 2008, then its papers published in 2006 and 2007 received three
citations each on average in 2008. The 2008 IFs are actually published in 2009; they cannot be
calculated until all of the 2008 publications have been processed by the indexing agency
(Thomson Reuters). The IF of any journal may be calculated by the formula: 2012 impact factor
= A/B, where A is the number of times articles published in 2010 and 2011 were cited by
indexed journals during 2012, and B is the total number of citable items, such as articles and
reviews, published by that journal in 2010 and 2011.

SCImago Journal Rank (SJR) is based on the
concept of a transfer of prestige between journals via their citation links. Drawing on a similar
approach to the Google PageRank algorithm - which assumes that important websites are linked
to from other important websites - SJR weights each incoming citation to a journal by the SJR of
the citing journal, with a citation from a high-SJR source counting for more than a citation from
a low-SJR source. Like CiteScore, SJR accounts for journal size by averaging across recent
publications and is calculated annually.
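The two-year impact factor ratio IF = A/B described earlier can be sketched as a short calculation; the citation and item counts below are hypothetical, not real journal data:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    """IF = A / B, where A = citations in year Y to items published in Y-1 and Y-2,
    and B = citable items (articles, reviews) published in Y-1 and Y-2."""
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical 2012 IF: A = 450 citations, B = 150 citable items.
print(impact_factor(450, 150))  # 3.0
```

An IF of 3.0 here matches the worked example in the text: papers from the two preceding years received three citations each on average.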
Source Normalized Impact per Paper (SNIP) is a sophisticated metric that intrinsically accounts
for field-specific differences in citation practices. It does so by comparing each journal’s
citations per publication with the citation potential of its field, defined as the set of publications
citing that journal. SNIP therefore measures contextual citation impact and enables direct
comparison of journals in different subject fields, since the value of a single citation is greater for
journals in fields where citations are less likely, and vice versa.

CiteScore metrics are a suite of indicators calculated from data in Scopus, the world’s leading
abstract and citation database of peer-reviewed literature. CiteScore itself is an average of the
sum of the citations received in a given year to publications published in the previous three years
divided by the sum of publications in the same previous three years. CiteScore is calculated for
the current year on a monthly basis until it is fixed as a permanent value in May the following
year, permitting a real-time view on how the metric builds as citations accrue. Once fixed, the
other CiteScore metrics are also computed and contextualise this score with rankings and other
indicators to allow comparison.
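CiteScore's three-year window can be sketched the same way (the counts are hypothetical, and the example assumes a 2017 CiteScore drawing on 2014–2016 publications):

```python
def cite_score(citations_in_year, publications_prev_three_years):
    """CiteScore = citations received in a given year to items published in the
    previous three years / number of items published in those three years."""
    return citations_in_year / sum(publications_prev_three_years)

# Hypothetical: 600 citations in 2017 to papers published 2014-2016
# (90 + 100 + 110 = 300 publications).
print(cite_score(600, [90, 100, 110]))  # 2.0
```

The only structural difference from the impact factor is the wider three-year publication window and the use of all Scopus-indexed document types.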

Alongside these journal-level metrics, the h-index is also widely used.

H-index: The h-index is short for the Hirsch index, which was introduced by Jorge E. Hirsch
(2005) as a way to quantify the productivity and impact of an individual author. Similar to how
the IF is used to measure a journal's or an author's standing in their scientific field, the
h-index has become another measure of the relative impact of scientific publications. While the
IF is derived from the quotient of total citations and total papers in a two-year span, the
h-index is simply the largest number of papers (h) from a journal or author that have at least
h citations each. For example, Webology has an h-index of 21 based on Google Scholar, which
indicates that the journal has published 21 papers with at least 21 citations each.
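The h-index count described above can be sketched as follows (the citation list is hypothetical):

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Hypothetical author with 6 papers: 4 papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3, 0]))  # 4
```

Sorting the citation counts in descending order makes the definition direct: h is the last rank at which the count is still at least the rank.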

Q. Mention the process for complaints & appeals for research work fraud from India and
Abroad.
A. Levels of plagiarism in non-core areas: for all other (non-core) cases, plagiarism is
quantified into the following levels, in ascending order of severity:
Similarities up to 10%: excluded
Level 1: Similarities above 10% and up to 40%
Level 2: Similarities above 40% and up to 60%
Level 3: Similarities above 60%
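The level bands above can be sketched as a simple classifier; the thresholds come directly from the text, while the function name is illustrative:

```python
def plagiarism_level(similarity_percent):
    """Map a similarity percentage to the level bands listed above."""
    if similarity_percent <= 10:
        return "Excluded"   # similarities up to 10% are excluded
    elif similarity_percent <= 40:
        return "Level 1"    # above 10% and up to 40%
    elif similarity_percent <= 60:
        return "Level 2"    # above 40% and up to 60%
    else:
        return "Level 3"    # above 60%

print(plagiarism_level(35))  # Level 1
print(plagiarism_level(75))  # Level 3
```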

If any member of the academic community suspects with appropriate proof that a case of
plagiarism has happened in any document, he or she shall report it to the competent/designated
authority of the university.

Q. Give the Significance of tools viz Springer and Scopus.


A. Springer is a leading global scientific, technical and medical portfolio, providing researchers
in academia, scientific institutions and corporate R&D departments with quality content through
innovative information, products and services. Springer has one of the strongest STM and HSS
eBook collections and archives, as well as a comprehensive range of hybrid and open access
journals and books under the Springer Open imprint.
Springer is part of Springer Nature, a global publisher that serves and supports the research
community. Springer Nature aims to advance discovery by publishing robust and insightful
science, supporting the development of new areas of research and making ideas and knowledge
accessible around the world. As part of Springer Nature, Springer sits alongside other trusted
brands like Nature Research, BMC and Palgrave Macmillan.
Scopus, on the other hand, is a large interdisciplinary database from Elsevier, with particular
strengths in science and technology. Its bibliometric and citation features use the whole of the
Scopus database; it is the largest abstract and citation database of peer-reviewed literature:
scientific journals, books and conference proceedings. Scopus IDs for individual authors can
even be integrated with the non-proprietary digital identifier ORCID. Delivering a comprehensive
overview of the world's research output in the fields of science, technology, medicine, social
sciences, and arts and humanities, Scopus features smart tools to track, analyse and visualise
research. Scopus gives four types of quality measure for each title: h-index, CiteScore,
SJR (SCImago Journal Rank) and SNIP (Source Normalized Impact per Paper).

Q. Explain journal finder and journal suggestion tools like JANE, Elsevier Journal Finder and
Springer Journal Suggestion.

A. Metrics have become a fact of life in many - if not all - fields of research and scholarship. In
an age of information abundance (often termed ‘information overload’), having a short hand for
the signals for where in the ocean of published literature to focus our limited attention has
become increasingly important. Research metrics are sometimes controversial, especially when
in popular usage they become proxies for multidimensional concepts such as research quality or
impact. Each metric may offer a different emphasis based on its underlying data source, method
of calculation, or context of use. For this reason, Elsevier promotes the responsible use of
research metrics encapsulated in two "golden rules": always use both qualitative and
quantitative input for decisions (i.e. expert opinion alongside metrics), and always use more
than one research metric as the quantitative input. This second rule acknowledges that
performance cannot be expressed by any single metric, as well as the fact that all metrics have
specific strengths and weaknesses. Therefore, using multiple complementary metrics can help to
provide a more complete picture and reflect different aspects of research productivity and
impact in the internal assessment. Below we introduce some of the most popular citation-based
metrics employed at the journal level. Where available, they are featured in the "Journal
Insights" sidebar on Elsevier journal homepages (for example
https://www.journals.elsevier.com/international-journal-of-antimicrobial-agents), which links
through to an even richer set of indicators on the Journal Insights homepage (for example
https://journalinsights.elsevier.com/journals/0924-8579).
CiteScore metrics are a suite of indicators calculated from data in Scopus, the world's leading
abstract and citation database of peer-reviewed literature. CiteScore itself is an average of
the sum of the citations received in a given year to publications published in the previous
three years divided by the sum of publications in the same previous three years. CiteScore is
calculated for the current year on a monthly basis until it is fixed as a permanent value in May
the following year, permitting a real-time view on how the metric builds as citations accrue.
Once fixed, the other CiteScore metrics are also computed and contextualize this score with
rankings and other indicators to allow comparison.
Current: a monthly CiteScore Tracker keeps you up to date on the latest progression towards the
next annual value, which makes the next CiteScore more predictable.
Comprehensive: based on Scopus, the leading scientific citation database.
Clear: values are transparent and reproducible to individual articles in Scopus.
