Engagement and Impact: How the UK and Australian approaches differ

Oct 24, 2017 | featured articles


In December 2015, as part of its National Innovation and Science Agenda (NISA), the Australian government announced the development of a national engagement and impact (EI) assessment. This will examine how universities are translating their research into economic, social and other benefits, in order to incentivise greater collaboration between universities, industry and other end-users of research. The first full, national assessment will run alongside the ongoing Excellence in Research for Australia assessment (ERA) in 2018.

In September 2017, the Higher Education Funding Council for England (HEFCE) released the first decisions regarding REF2021, and before the end of the year the Australian Research Council (ARC) will release guidance for the roll-out of its first Engagement and Impact Assessment.

So, this is a timely moment to look at how the two approaches differ and what they might learn from each other. There’s also a surprising new dimension of relevance between the two countries, after the UK’s Universities and Science Minister announced on 12 October this year that he wanted to see a knowledge exchange framework (or KEF) to assess engagement, particularly between universities and businesses.

The UK’s REF assesses outputs, impact, and environment in a single process approximately every six years, with impact included for the first time in the most recent round in 2014. In Australia, research outputs and a number of other research-related indicators are assessed approximately every three years. For the first time, an assessment of impact and engagement will be introduced to the Australian process in 2018, drawing on data from the previous six years.

The engagement part of the Australian process is expected to involve primarily quantitative information, supplemented by qualitative information. The impact element is expected to involve primarily qualitative information, in the form of case studies that may be supplemented by quantitative information.

Making engagement a central part of the assessment process is a conceptual difference from the UK’s REF, which is neutral about how impact happens. According to the guidance, impact achieved with great effort and brilliant engagement wouldn’t be graded any more highly than impact that happened serendipitously or via passive means, such as through journal articles.

On the one hand, the Australian engagement assessment looks rather like the UK’s annual Higher Education – Business and Community Interaction survey (HE-BCI), which is expected to form at least part of any future KEF. In both Australia and the UK, these engagement measures collect quantitative information on patents and patent citations, analyses of research outputs (co-authorship, etc.), analyses of research income (funding from end-users), and co-supervision of research students. HE-BCI also requires information on other activities intended to have direct societal benefits, such as continuing professional development and continuing education courses, lectures, exhibitions, and other cultural activities. This information is optional in the Australian engagement assessment.

However, when looking at the requirements for impact case studies in the EI, the importance of engagement to the Australian process becomes even more obvious. For the Australian impact pilot, the guidance declared that ‘the assessment of impact will focus on the institution’s approach to impact, that is, the mechanisms used by institutions to promote or enable research impact’. Although the UK’s 2014 REF did collect evidence on this process, it accounted for less than 4% of the overall score, with 16% allocated to the ‘reach and significance’ of the impact. In 2021, ‘reach and significance’ will account for 25% of the score.

The implications of linking the ‘institutional mechanisms’ of impact to a specific case study are debatable. One of the findings of REF2014 was that narrative accounts of institutional activity around impact didn’t necessarily connect well with the best examples of impact, which often happen independently of institutional structures to support engagement and impact. So, what might be the effect of bringing them together? It might lead to weaker but institutionally relevant case studies being submitted, or to tortuous explanations of how a specific impact was really connected to a ‘mechanism’.

It will be interesting to see how this plays out in the forthcoming report on the pilot from the Australian Research Council – will there be high marks for effort (engagement) as well as achievement (impact)?

To join us for a discussion of these issues, sign up on Eventbrite here: ‘ERA vs. REF – How UK and Australian impact assessment differs’.

