Policy and compliance
The University of Salford is committed to excellent research with impact, conducted to the highest standards of research integrity. To achieve this, and to meet our obligations to our research participants, our research community must operate to rigorous standards of research ethics and integrity.
Annual Statement on Compliance with the Concordat to Support Research Integrity
As part of our commitment to the Concordat to Support Research Integrity, we publish an annual statement on our activities to support research governance and integrity at the University. The report for the 2021-22 academic year is now available.
Guidance
- Biotechnology & Biological Sciences Research Council (BBSRC) Safeguarding Good Scientific Practice
- British Psychological Society Code of Ethics and Conduct
- British Sociological Association Guidelines on Ethical Research
- Concordat to Support Research Integrity (UUK)
- Economic & Social Research Council (ESRC) Research Ethics
- Engineering & Physical Sciences Research Council (EPSRC) Ethics
- Export Controls applying to academic research
- Getting Ethics approval & complying with what was approved
- Good Clinical Practice (GCP)
- Nagoya Protocol - An explanatory guide
- Nagoya Protocol - An explanatory presentation
- Nagoya Protocol - Checklist for Researchers
- National Security and Investment Act (2021)
- Society of Radiographers Code of Professional Conduct
- Trusted Research (CPNI)
- UK Policy Framework for Health & Social Care Research
- UK Research & Innovation (UKRI) Research Integrity
- UK Research Integrity Office Code of Practice for Research
- Wellcome Trust EPQ Student Guide to Ethics
- WMA Declaration of Helsinki – Ethical principles for medical research involving human subjects
- World Health Organization – Ethics and Health
Internally (network login required), we also have a Research & Enterprise Hub, a Staff/PGR Academic Ethics Hub and a Student Academic Ethics Hub. The Hubs contain detailed information on the above and support the production of excellent research with integrity.
Nagoya Protocol
“The Nagoya Protocol on Access to Genetic Resources and the Fair and Equitable Sharing of Benefits Arising from their Utilization to the Convention on Biological Diversity is an international agreement which aims at sharing the benefits arising from the use of genetic resources in a fair and equitable way”
Each country has rights over its genetic resources (such as animals, plants and organisms) and the traditional knowledge associated with them. The Nagoya Protocol was designed to ensure the equitable sharing of these genetic resources and their associated traditional knowledge (ATK) and the benefits that arise from their use.
Established in 2010, the Nagoya Protocol puts the Access and Benefit-Sharing (ABS) principles from the Convention on Biological Diversity into a legally binding framework.
From 12 October 2014, anyone wishing to access genetic resources and/or the traditional knowledge associated with them must comply with the EU regulation.* The regulations in the UK apply to any company, organisation or individual conducting research and development on genetic resources and/or ATK where:
- The genetic material and/or ATK was accessed on or after 12 October 2014, and
- Was from a country that is party to the Nagoya Protocol and has access and benefit sharing (ABS) legislation
It does not apply to:
- Human genetic resources
- Genetic resources for which access and benefit-sharing is governed by specialised international instruments (such as the International Treaty on Plant Genetic Resources for Food and Agriculture)
The Protocol does not apply to activities that took place before the regulation came into force.
For the most up-to-date information on the Nagoya Protocol and complying with the agreement, the Convention on Biological Diversity and UK Government websites should be the first point of reference.
* On 15 November 2018, the Department for Environment, Food & Rural Affairs published a statement confirming that ‘regulations in the UK which implement the Nagoya Protocol […] will continue to be operable after the UK leaves the EU’.
Principles of responsible research assessment (DORA)
The University of Salford is committed to responsible research assessment. We are working to make sure we evaluate our researchers fairly, accurately and transparently, including through the responsible use of research metrics across institutional processes and decision-making.
The University has adopted eight core principles. We signed the San Francisco Declaration on Research Assessment (DORA) in May 2019, so the first three are taken from DORA, while the further five have been developed by our Responsible Research Assessment task group to add clarity and specificity for our institutional context. These are informed by consultation with our research community and by the recommendations of The Metric Tide report which, along with DORA, has shaped the sector's approach to research evaluation. For each principle, a brief explanation and an indication of how it might be applied are provided.
The University is putting in place an action plan to translate these principles into practice. We recognise that these principles challenge some common assumptions and practices which are deeply rooted in global research culture, and that change will not be quick or straightforward. But we are committed to working together – and in partnership with other research institutions, funders and publishers – to play our part in achieving the goal of responsible research assessment.
1. Do not use journal-based metrics, such as Journal Impact Factors, as a surrogate measure of the quality of individual research articles, to assess an individual [researcher's][1] contributions, or in hiring, promotion, or funding decisions. [DORA principle]
- Journal impact factors and rankings are widely misused. They do not accurately measure the quality/impact of any specific article or author published in a journal; they are designed to assess the journal itself (e.g. for an author selecting where to publish).
- If a metric is needed to support an assessment process, the principles below apply: it shouldn’t be used in isolation, and should be appropriate and robust.
- When evaluating a job application, for example, the number and field-weighted citation impact (FWCI) of publications can be taken into account if a) relevant to the field (see principle 6 and the worked illustration below) and b) considered in conjunction with other indicators, e.g. bidding, other outputs, enterprise, leadership, collaborations. To support selection, we will review and aim to improve the quality of evidence asked of applicants.
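As a worked illustration of how FWCI operates (based on the standard definition used by citation data providers such as SciVal; the figures here are hypothetical): FWCI = the citations an output has received ÷ the average citations received by similar outputs (same subject field, publication year and output type). An article with 10 citations, in a field where comparable articles average 5 citations over the same period, therefore has an FWCI of 10 ÷ 5 = 2.0, i.e. it has been cited twice as often as expected for its field, while an FWCI of 1.0 represents citation performance exactly in line with the world average.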
2. Be explicit about the criteria used to reach hiring, tenure, and promotion decisions, clearly highlighting, especially for early-stage investigators, that the [research] content of a paper is much more important than publication metrics or the identity of the journal in which it was published. [DORA principle]
- Transparency and diversity are at the heart of this principle. If we focus on high-ranking journals we risk disadvantaging early-career researchers, interdisciplinary research, emerging disciplines, and other modes of research dissemination.
- Within all research assessment processes, we will ensure our criteria prioritise qualitative evaluation, assessment panels are trained, and transparent information is provided to applicants.
3. For the purposes of research assessment, consider the value and impact of all research outputs (including datasets and software) in addition to research publications, and consider a broad range of impact measures including qualitative indicators of research impact, such as influence on policy and practice. [DORA principle]
- This builds on principle 2, by emphasising that research assessment should reflect the diversity of research. Our research assessment mechanisms should recognise and reward a wide range of different outputs, activities, and types of impact (i.e. not solely highly cited publications).
- At Salford, we already consider an expansive set of examples and evidence when assessing research through Three-Year Research Plans and the forthcoming Academic Career Framework.
- We will review other research assessment processes, including recruitment and promotion, and will make sure that this principle is well understood by researchers and those evaluating them.
4. Where quantitative metrics are used, they support qualitative, expert-led evaluation.
- This principle elaborates on DORA principle 2 and refers to all outputs, not only journal papers.
- It is based on the definition of 'humility', one of five basic principles defined in The Metric Tide.
- For example, when an internal peer review panel evaluates the quality of an output, if it considers citation metrics these will be secondary to the expert assessment of the reviewers.
5. We recognise the potential for bias within both qualitative and quantitative research evaluation, so indicators are used in combination wherever possible.
- No indicator tells a full story, so robust evaluation relies on using a variety of indicators.
- Some quantitative indicators are more robust than others (depending on the quality of the data behind them, and how they are calculated)[2], but all are vulnerable to inappropriate use or misinterpretation. And qualitative assessment can be affected by unconscious or conscious bias.
- For example, in assessing researchers' three-year plans, metrics (e.g. bidding, publications) and qualitative indicators (e.g. collaborations, leadership, enterprise activity) are taken into account.
- We will provide training to assessment panels to ensure robust evaluation processes, and will expect Equality Impact Assessments to be used when designing evaluation models and choosing indicators.
6. We account for variation across disciplines; it will not always be appropriate to apply the same indicators in the same way across the University.
- Context is critical if we are to choose appropriate indicators for the purpose.
- We will choose field-weighted or normalised citation metrics, where citations are relevant. For example, in some scientific fields where journals are the dominant dissemination method and citation rates are high, citation metrics can be a sound measure of research impact. But in other fields, particularly arts and humanities, book publishing and other dissemination are common and citation rates are low, so citations don't effectively measure or differentiate between levels of impact. Note: fields are defined by metrics providers (e.g. Scopus), so if a researcher's work is classified in a field with high citation rates but they work in a specific area where citations tend to be lower, the field-weighted metric will not provide an accurate measure of their relative impact.
- We will provide advice and training to help those assessing research to take context into account.
7. We distinguish between different levels of assessment (e.g. output, individual, journal, institution) and recognise that the indicators used must be appropriate for the level.
- Sometimes indicators are used to measure the wrong thing: for example, a journal ranking (which is a valid metric for assessing the journal itself) being used as a surrogate for the achievements of an individual researcher who has published in that journal.
- We will provide advice and tools to those assessing research on appropriate measures for each level of assessment. The Metrics Toolkit is a useful resource for this.
8. We ensure that open and transparent data collection and analytical processes are used, and that those evaluated may verify data and analysis.
- Transparency is a core principle defined in The Metric Tide and is vital to developing a culture of trust, openness and inclusion. This is important for staff wellbeing and retention, and to ensure that we are accurately and consistently recognising and rewarding achievements.
- We will select verifiable data sources and robust indicators, while recognising their limitations. Citation counts from Scopus/SciVal or Web of Science are often more robust than those from Google Scholar or ResearchGate because of the quality of the underlying data. However, for fields that are less well represented in the Scopus or Web of Science databases, these sources provide an incomplete picture.
- When data informs decisions about recruitment, promotion or 'significant responsibility for research', we will make researchers aware of the data being used and enable them to check its accuracy.
[1] The DORA principles refer to ‘scientists’, but at the University of Salford we interpret these as applying to all researchers irrespective of discipline, so have amended the language accordingly. Similarly, where the DORA principles refer to journals, these apply similarly to other publication channels (e.g. book publisher/imprint).
[2] The Metrics Toolkit is a useful online resource which explains and outlines the strengths and weaknesses of different research metrics.
Useful external resources
- Bibliomagician blog - The LIS-Bibliometrics network blog
- DORA (San Francisco Declaration on Research Assessment) - Includes principles, signatories and good practice examples
- Leiden Manifesto - 10 principles for research evaluation
- The Metric Tide - Report of the Independent Review of the Role of Metrics in Research Assessment and Management (2015), chaired by James Wilsdon, including principles and recommendations. The Metric Tide Revisited will be a rapid review over a four-month period from mid-May to mid-September 2022.
- Metrics Toolkit - Useful tool for choosing metrics. Provides evidence-based guidance on what different metrics mean, how they are calculated, pros and cons
Research Excellence Framework (REF)
Submissions to the Research Excellence Framework (REF 2021) were made in March 2021, and the results were published in 2022.
The University’s internal REF site (requires network login off-campus or on a mobile device) is where you will find information on all our internal processes and policies.
Below you will find a link to the University’s REF Privacy Notices. These provide information on how and why we collect, use and store data as part of our REF2021 preparations and submission.