
An interview with Andrew Porter, Research Integrity and Training Advisor

Posted by , on 16 June 2025

Andrew Porter is the Research Integrity and Training Advisor at the Cancer Research UK Manchester Institute within the University of Manchester. Andrew worked for many years as a cell biology research scientist before moving into his role in research integrity. I caught up with Andrew over Zoom to learn more about his career path, what research integrity means and how Andrew aims to support researchers in his role at CRUK Manchester Institute and beyond.

Can you tell us about how you first became interested in science, your research career and how you began working in research integrity?

I’ve always liked science. When I was a teenager, my next-door neighbour gave me her copies of ‘New Scientist’ each week after she’d finished reading them. This was a broad introduction to science; I liked articles about space, about biology and many other things. At school I particularly liked biology and chemistry. I still remember my A-level (~high school) teacher Mrs Edwards putting up a cartoon of a cell in one of our first lessons; it was the model that we’d learnt previously, just a circle with a few dots inside. Then, she put up an electron micrograph of a cell, and it was full of stuff! It was a whole new level, and I knew, pretty much in that moment, that I wanted to find out more.

I went on to study biochemistry at the University of Oxford. It was a four-year course with a lab placement working with Caenorhabditis elegans (worms), and I thought that seeing these green, glowing, living things down the microscope was really cool! I was inspired by the idea that it was possible to look into an organism and see cells dividing or growing. I then went on to a four-year cell biology PhD at the Laboratory for Molecular Cell Biology at University College London, and after several rotation projects ended up working with C. elegans again, exploring neurobiology and cell signalling. One of the things I’d been encouraged to do by my PhD supervisor, Stephen Nurrish, was to broaden my expertise. So after my PhD, I came to Manchester to work with Angeliki Malliri, at what was then the Paterson Institute and became the Cancer Research UK Manchester Institute. We were looking at cell signalling pathways in cell adhesion and how cell junctions form.

During this time, I was doing some microscopy to look at cell junctions and every image I took was sort of messy. These proteins were supposed to be in a particular dot, but in my images, the signal was spread along the whole cell membrane. I wondered what I was doing wrong. Why couldn’t I get images like the beautiful ones that I was seeing in papers? Then, I was at a conference on cell polarity and saw some of the people who’d written papers with the beautiful images. When I went to speak to them, they told me that they saw the spread that I was describing, but they were adjusting the thresholds, because they were focussed only on what was happening at particular junctions in the cell. Of course, that kind of adjustment and decision making can be important to communicate our models, but we have to remember when we’re reading papers that authors may be doing this. I spent nine months messing around on microscopes before I understood that it is important to think about the way we present and interpret data. It also got me thinking: what if the broad signal I was seeing was not just background or random noise, but a sign that the proteins were doing something else? Now, when I talk to people about how they write their methods and present their data, one of the things I try to encourage is that while you might want the beautiful ‘poster’ image, you should give people the background information on how it was created, and ideally give them access to your underlying dataset. If I’d been able to download the raw images from those papers, I would have known that my method was working. I could have made the same adjustments as the authors and got back to what they had. That stuck with me.
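As an aside (not part of the interview itself), here is a minimal sketch of the kind of display adjustment Andrew describes: the same raw fluorescence image rendered with two different intensity thresholds, with the chosen settings printed so a reader could reproduce, or undo, the adjustment. The file name and threshold values are hypothetical placeholders.

```python
# Minimal sketch: one raw image, two display ranges. File name and values are hypothetical.
import tifffile
import matplotlib.pyplot as plt

raw = tifffile.imread("junction_channel.tif")  # hypothetical raw microscopy image

fig, axes = plt.subplots(1, 2, figsize=(8, 4))

# "Poster" rendering: a high lower threshold keeps only the brightest junctional puncta.
axes[0].imshow(raw, cmap="gray", vmin=raw.max() * 0.6, vmax=raw.max())
axes[0].set_title("High display threshold")

# Full-range rendering: the same data also shows the weaker signal along the membrane.
axes[1].imshow(raw, cmap="gray", vmin=raw.min(), vmax=raw.max())
axes[1].set_title("Full intensity range")

for ax in axes:
    ax.axis("off")

# Record the display settings alongside the figure so the adjustment is transparent.
print({"file": "junction_channel.tif",
       "poster_vmin": float(raw.max() * 0.6), "poster_vmax": float(raw.max())})
plt.show()
```

Sharing the raw image plus a record like this is what would let someone else recover the membrane signal that the ‘poster’ rendering hides.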

I worked on various projects through my postdoc, but I never really settled on anything specific. If I’m honest, I was searching for a question that would be interesting enough to take me forward. I wrote various grants, did projects, and enjoyed them all, but never felt like I found something that I could spend 15-20 years working on. I also started to realise that my own mental health wasn’t amazing, and some of the pressures of being in the lab, for example feeling like I needed results, that I needed experiments to work, started to feel quite dangerous for my mental health. I was starting to feel that if an experiment didn’t work, I wouldn’t have a paper, and without a paper, I wouldn’t get the grant, and I needed the grant to get a job. It was an anxious time and got me thinking about whether I was genuinely open to my hypothesis being wrong, or my experiment failing, because of the impact that would have on my funding. Then, projecting my mind forward, while I loved the idea of having my own lab and having people in it to teach and support, I felt that bringing in funding for others would be a heavy responsibility.

Then, while I was thinking about my future, COVID hit and obviously I had a few months away from the lab. I wrote up some of my research, but I started to feel like maybe I’d run out of energy for pipetting! When I returned to the lab, my boss very kindly offered me a change in position, so I went back as a lab manager. I enjoyed it and, funnily enough, the bits I enjoyed most were organising systems: getting the ordering working again, writing up some protocols and figuring out the freezer plans. These hadn’t been my strong points when I was purely a researcher, but when they were my focus, I realised that I liked trying to fix things. After about six months, a new role came up at the Institute, working on research integrity. I was worried that I might regret leaving the lab, but after talking to other people, I realised that it’s not a bad thing if I do have those regrets sometimes, because I did enjoy my research experience. But now I see my role as using what I learned through those experiences to shape how I help people to do their research in a thoughtful manner. I feel like I have some credibility because of the research that I’ve done, and while I’m not an expert in all the research I engage with, there are general principles that I’ve learned. I’ve spent the last four years learning about research management, research integrity and data management; that’s my research project now. I think I have found the thing that I could spend 20 years of my life on – helping researchers who are still enthused about pipetting and looking down microscopes!

I hadn’t thought about the link between research integrity and mental health, but it seems that it is essential to have a good system of research integrity in place to ensure you don’t succumb to the pressures of being a researcher. Do you think these topics are interlinked and how do you put a system in place to support researchers?

In an induction that I run for all new starters, we look at the policies of the university and Cancer Research UK, our funder. I try and turn the policies into case studies and pose questions that present tricky situations for researchers to think through, and try to talk about those pressure points that can have an impact on mental health. I use a quote from Richard Feynman: “The first principle is that you must not fool yourself, and you are the easiest person to fool.” I ask the researchers to imagine they’ve done 20 experiments and 15 of them look a bit off, but there are a few that look okay. Since your brain is a pattern-seeking and reward-seeking tool, you might try and find reasons to discount some of the data, and that’s enough to get to a result. Of course there’s a balance, because there might be genuine reasons behind your decision, but if you’re not in that good place, or you don’t have good structures, or you’ve got somebody else putting pressure on you for a particular outcome, you might not make good choices.
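As an illustration of Feynman’s warning (my own sketch with made-up numbers, not an example from the interview), discarding the experiments that ‘look a bit off’ can conjure an apparent effect out of pure noise:

```python
# Minimal sketch: cherry-picking replicates manufactures an effect where none exists.
# All numbers are simulated; the true effect here is zero.
import numpy as np

rng = np.random.default_rng(0)
null_effects = rng.normal(loc=0.0, scale=1.0, size=20)  # 20 experiments, true effect = 0

kept = np.sort(null_effects)[-5:]  # keep only the 5 "best-looking" results

print(f"Mean of all 20 experiments:     {null_effects.mean():+.2f}")  # close to zero
print(f"Mean of the 5 kept experiments: {kept.mean():+.2f}")          # looks like a real effect
```

Writing down the reason for discarding a replicate before you see how it changes the result is one simple structural guard against fooling yourself.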

My job title is Research Integrity and Training Advisor, and this has brought me into areas around research culture because I don’t look at research integrity in isolation. Sometimes you hear about research integrity and people are really talking about academic misconduct, which is often a symptom of poor research culture, and those things all impact on people’s mental health. From my personal experience, which I try and share as part of my job, I found myself responding to reviewer comments and started to think about whether I was making an argument for something I genuinely believed in or whether I was just trying to win an argument to get a paper published. I think it’s at these points of tension that you need other people around you. You need people to ask you good questions. You need people to support you. You need those good structures. There is very much a link to mental health and, if you look at the reasons why people leave academia, I would suspect at least some of those include poor mental health.

We’ve talked a lot about research integrity already, but could you share a definition?

I tend to go to the Concordat to Support Research Integrity (the UK Research Integrity Concordat), which UK universities and researchers are signed up to (and which has just been updated). I think it really helps explain the topic and I like it because it starts with a value-oriented perspective.

To me, research integrity is about being honest in how you report things and in how you interact with other people. It’s about being accountable. It’s about doing things to a high standard and being transparent and open in your communication. So, it’s not just sharing your data at the end of an experiment but ensuring that people can understand the data and they can use it. This comes together in making your research Findable, Accessible, Interoperable and Reusable (FAIR). Another principle of research integrity is care and respect, and I think this is broadly applicable. It is care and respect for your colleagues, the people you’re immediately working with, so building that positive research environment. It’s care and respect for the money that you’re using, which is generally public money or charitable funding. It’s care and respect for patient samples that you might use in your research. It’s care and respect for the environment. And, it’s about care and respect for the wider scientific community. One example of where the concept of care and respect can apply is around using generative AI, asking things like “can you reasonably use those tools if they are drawing on improper uses of other people’s materials without acknowledging them?”

What does your day-to-day job look like?

I sit in the operations team in a cancer research institute with about 250 people. So, in addition to my work on research integrity, my day-to-day job includes other tasks that help the institute to run. I help with some of our communications, our social media, and our new website. I do tours of the building when visitors come, I contribute to the operations side, and I’ve helped run Christmas party quizzes! But actually, I do think that these tasks have an impact on research integrity because they are part of building a supportive and open environment. The communications side is about promoting people’s work, and having an article written about them, and us sharing that, is a way of saying that we value them. So, I try to tie even those activities in to establishing a supportive environment. I have a self-written statement of my role that says my job is about supporting well-trained and reflective researchers in performing trustworthy, knowledge-advancing research with integrity.

Of course, the training aspect of my job is really important. I deliver an induction with all new starters, and I run training sessions with PhD students at different stages of their careers – an introduction to scientific literature at the beginning of the PhD and a thesis skills workshop at the end. I’m also available for one-on-one conversations where people can ask me about training or resources, or ask me to look at their data with them.

Then, one of the pillars of my work is a pre-submission review. All manuscripts from the institute are sent to me before they go to journals. I have tried to build a pathway where researchers incorporate this into their normal process, in a similar way to sending their manuscript to the co-authors and their colleagues. When I look at those drafts, I’m looking for structural issues that can sometimes trip people up. For example, have they put their data management details in? Or, if they have used human samples, have they put the right ethics approvals in? I focus on the details that you might miss even if you’ve looked at your own manuscript 1000 times, trying to be an extra pair of eyes. Those issues are usually easy to fix.

I check the acknowledgements and authorship, asking whether credit is being given. I look at the data itself, checking for accessibility, consistency and readability. I send a report back to the researchers and offer my suggestions based on good practice. I think some of the problems come back to the fact that if you report your research as you did it, it ends up looking very different to how the literature tends to look. I think that some of this is a historical link back to short articles from small communities, where people all knew each other (and could informally share methods and extra details), but we’re in a very different space now. A lot of what I’m doing is nudging people into a slightly broader way of sharing. I’m particularly keen on things like research resource identifiers, which are both machine readable and human readable. We use the iThenticate software to do a comparison check, because sometimes people have used content from other papers, often their own, which should be referenced. We’ve also just got a trial with a product called Imagetwin that looks for duplications. Since we’re using this software internally, the intention is to use these tools to pick up errors rather than assuming that the researchers have done anything fraudulent. These errors are often the category I errors that somebody like Elisabeth Bik might pick up, for example putting the same panel in twice by mistake.

I have an example of that happening from my own experience, which fortunately got caught by a peer reviewer. I had some images of metaphase spreads, and I pulled together a panel with three control cells and two knockdown cells. I took one picture of the knockdown cells from an old PowerPoint presentation and one of them I pulled in from the original data. The reviewer pointed out that the two were the same picture, but the zoom level had changed, so actually it would have moved into a category II error, because there was a change in how they were presented. I’m eternally grateful to that reviewer for spotting the problem! I use that story in my training now, because it’s an example of the benefit of the error being caught before the paper was published. To address the error, as well as explaining what had happened, I put in completely fresh data for all these images, and we made the whole of the peer review process open and available for readers to see what happened. This meant that there was no ambiguity, and this is what we aim to do with the pre-submission review. Of course, although the role is designed to be supportive, there’s always the possibility you might have somebody within a system who is subversive. If we detect something that we think goes beyond a genuine mistake, since we are part of the University of Manchester, we would raise it as a misconduct concern with the University’s governance office and they would investigate. I think that having that distance – i.e. not being the one responsible for a formal investigation – helps, but even after four years, I’m still trying to find the best way to explain the pre-submission review process so that people don’t feel like they’re being policed.

Is there anything that you would like to see publishers doing to support research integrity?

I think, to be fair, that many publishers are working on this. Lots of publishers are bringing in similar checks and tools to those I’m looking at. But if I were to pick just one thing, it would be to provide more advice to peer reviewers. When you look at peer review overall, it’s not consistent, and I think this might be down to peer reviewers being unclear on what they should be delivering. For example, I’ve had peer review comments of just two paragraphs, and I’ve had peer review comments that were pages and pages. The reviewer who picked up the issue with my images also found spelling errors and inconsistencies, as well as including feedback on the experiments, so it was clear that they had read the manuscript in a very different way to others, maybe more like I would read a paper for a pre-submission review. Ideally, the editor will provide guidance to the peer reviewers. I think that peer reviewers should be asked to explain their expertise to help the editor assign roles. Then, when submitting their review, they should highlight the areas that they have reviewed, as well as those that they didn’t evaluate because they were outside their knowledge.

Cover of Journal of Cell Science featuring an AiryScan super-resolution image of a mitotic MDCKII cell stained for α-tubulin (blue), which marks microtubules, and DNA (Hoechst 33342, green). The growing ends of astral microtubules, which help orient the mitotic spindle, are stained for EB1 family proteins (magenta). For the cell to divide in the correct orientation, Dlg1 (yellow) must be localised at the cell cortex through interaction with CASK, as described in the article by Porter et al.

What are your top tips for ensuring that data, especially microscopy data, are Findable, Accessible, Interoperable and Reusable (FAIR)?

So in microscopy, there has been less of a history of data sharing in comparison to -omics data, for example. There are repositories like Zenodo, or there are specific image databases such as the BioImage Archive hosted by EMBL-EBI, the Image Data Resource (IDR) and the Electron Microscopy Public Image Archive (EMPIAR), where you can deposit entire image datasets (and many journals have recommended lists of repositories, too). I think that the variety of instruments and the different modalities make sharing image data more complicated than some other data types, and this also means that it is essential to capture and share the metadata as well. QUAREP-LiMi has some great guidance on the topic, so researchers don’t have to start from scratch. Of course, researchers also need to remember to share the workflow of any image analysis that they have performed. It’s important to think about these details as early as possible, and this is why I’m a big advocate for data management plans, even as an early-career researcher. There are online tools for data management planning, but it’s important not to see the plan as just 500 words that you’ve got to fill on your grant application. There’s lots of guidance on FAIR sharing, but it is a hard problem, especially with image datasets getting bigger and bigger. However, even an imperfect step forward would be good, so depositing your raw data and metadata when publishing is a great start. Having an electronic lab notebook (ELN) is helpful for ensuring that you are collecting all the data you need. You could even create a form confirming that all the required settings have been captured from a device, and add a link to the data. Implementing an ELN across the Institute is one of the projects I’m currently working on. ELNs can support data management at the individual level – planning, thinking about your experiments, thinking about where the data is going to go – and they are key to data traceability.
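To make the ‘form’ idea concrete, here is a minimal sketch of a metadata completeness check that an ELN template or deposition checklist might enforce before image data are deposited. The required fields and example values are hypothetical, not a QUAREP-LiMi or repository specification.

```python
# Minimal sketch of a metadata completeness check before depositing image data.
# Required fields and the example record are hypothetical, not a formal standard.
REQUIRED_FIELDS = [
    "instrument", "objective_na", "excitation_nm", "emission_nm",
    "pixel_size_um", "exposure_ms", "acquisition_date", "raw_data_link",
]

def missing_metadata(record: dict) -> list[str]:
    """Return the required fields that are absent or empty in a metadata record."""
    return [field for field in REQUIRED_FIELDS if not record.get(field)]

example_record = {
    "instrument": "Confocal A",      # placeholder instrument name
    "objective_na": 1.4,
    "excitation_nm": 488,
    "emission_nm": 525,
    "pixel_size_um": 0.065,
    "exposure_ms": 200,
    "acquisition_date": "2025-06-16",
    "raw_data_link": "",             # the repository link was never added
}

gaps = missing_metadata(example_record)
print("Missing:", ", ".join(gaps) if gaps else "nothing - ready to deposit")
```

Running a check like this at acquisition time, rather than at submission time, is what keeps the metadata deposit-ready.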

I also advise researchers to get plugged into whatever resources are available at an institutional level, or with the microscopy facility you use. Speaking to experts early on in your planning is key, especially as you may end up generating very large amounts of data that need specialist processing and storage.

Overall, it’s important to remember that your data can be very valuable, both for your research and beyond, whether as training data for creating better models that use artificial intelligence or because it is a precious patient sample that cannot be replaced – you might be the only person to ever image that sample. Of course, especially in the latter case, the data and metadata might need to be held securely, but you can incorporate a data request system in your data management plan so that it is accessible to other researchers under the correct safeguards.

Do you have any suggestions for best practices around using generative AI?

I really swing from optimist to pessimist regarding this technology. I’d like to think that we could use this well, because I think the models that integrate across different types of information seem very exciting. I’m very interested, at least in principle, as to whether they could be used for epigenetic modelling. You know, could you learn the language of the genome, to discover information that is hidden from us or hard to interrogate? Could you learn the language of cells from images?

But the most frequent place that I have seen people using generative AI is for writing, especially in trying to extract data from very large amounts of content. Whether generative AI is the right tool for that when it is currently still prone to hallucinations and biases, and whether it really is summarising the data, is still an open question. The Australian Government ran a test in which civil servants were given document summaries and asked to rate them; the AI-generated summaries ranked lower, and the civil servants could often tell which ones had been written by real people. In some cases, there were documents that already had summaries at the end, but the AI just ignored those summaries because the weight was with the rest of the 50 pages of text. So, it wasn’t able to read the documents intelligently.

I think there are lots of open questions, and individual institutions’ guidance is changing, along with editorial, funder and government policy. I think being reflective and thinking about what you’re doing is important, so my tips are to keep track of what you’re doing and to use it like a scientist: record what the model was, what went in, what changes it made, and what the output was. And use it responsibly – think about whether the models have been generated in ethical ways. I’d really like to see big university-level organisations build their own models, where they’ve trained the model on good-quality data, where permission to use copyrighted content has been obtained, and where people have been paid for the content that’s being used. I’d like to see generative AI move more into the academic space, because at the moment, it’s very driven by companies whose motivations are not necessarily aligned with those of scientists. Of course, there are also publishing guidelines on the use of AI and how to declare it. If you use it like a spell check or word check, that’s probably fine, but as soon as the model is editing text or touching data, then you need to declare it in your methods. You need to say what you did, and you need to do that as clearly as you would for any other tool or workflow. Those are my thoughts today, but if you ask me on a different day, I might be more pessimistic! For example, there are the paper mill papers and fake images, which we haven’t discussed.
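As a concrete, entirely illustrative example of ‘recording what the model was, what went in and what came out’, here is a minimal sketch of the sort of log entry a researcher could append each time a generative model touches their text or data. The field names and values are assumptions, not a required format.

```python
# Minimal sketch of a generative-AI usage log; field names and values are hypothetical.
import json
from datetime import datetime, timezone

def log_ai_use(log_path, model, task, prompt, output_summary, changes_accepted):
    """Append one record describing a single use of a generative model."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,                        # name and version of the model used
        "task": task,                          # what the model was asked to do
        "prompt": prompt,                      # what went in
        "output_summary": output_summary,      # what came out
        "changes_accepted": changes_accepted,  # what you actually kept
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")      # one JSON record per line

# Hypothetical usage: record a language-polishing pass on a methods paragraph.
log_ai_use(
    "ai_usage_log.jsonl",
    model="example-llm-v1",
    task="Rephrase methods paragraph for clarity",
    prompt="Methods paragraph, draft 3",
    output_summary="Reworded two sentences; no factual changes",
    changes_accepted="Accepted one reworded sentence, rejected the other",
)
```

A record like this is also exactly what you need when a journal asks you to declare in your methods how AI was used.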

Any final thoughts?

I think it’s been important for me to bring my own perspective and experience to my role in research integrity. I’m not someone who has been investigating misconduct in research for 20+ years, but I really believe that a supportive environment with good structures and a strong community build good research practice and culture. I hope that I am able to contribute to this. I try to make myself available, even if it’s just for a cup of tea and a chat for half an hour, as I think that interacting on a personal level is important. Our little microcosm of the institute works really well for those interactions. I’m grateful for the opportunity to develop this role within the institute and at the same time, be able to plug into the university groups such as the Office for Open Research and the research culture working group. I hope that doing these small things will develop into something big over time.

Categories: Interviews
