
VolumeEM: An Interview with Kedar Narayan

Posted on 14 December 2024

I had the pleasure of sitting down with Dr. Kedar Narayan, recent recipient of the Royal Microscopical Society’s Alan Agar Medal for Electron Microscopy. Dr. Narayan is deeply involved with the grassroots volume EM (vEM) community and, together with his colleagues, has used his time at the Frederick National Laboratory to benefit not only cancer research but microscopy methods around the world. Recently, they hosted a vEM 101 workshop for a small group of scientists, and I was able to attend and hear from experts and vendors in the vEM world. Enjoy the following interview for a detailed introduction to vEM and a taste of the future of EM methods!

Daniel: As an introduction, could you give a brief summary of volume EM as a group of methods?

Kedar: Both Transmission Electron Microscopy (TEM) and Scanning Electron Microscopy (SEM) are by default 2D techniques. SEM images of the surfaces of cells, for example, may look three-dimensional, but really they’re still just 2D images of a 3D object. So, what differentiates volume electron microscopy from a traditional SEM is that you get a truly 3D reconstruction of a volume of the biosample. The most intuitive way to think about this is to compare it to imaging the inside and the outside of, say, a loaf of bread.

Let’s say you’ve got cinnamon raisin bread. I want to know where the raisins are, not just on the surface, but also inside the loaf, in that volume. If I had a bread knife, all I’d have to do is slice up the loaf of bread, I’d image each of the slices, and then I’d get a truly volumetric representation of where the features of interest are in this volume.

And that’s one approach that several volume EM techniques employ – essentially a ‘section and image’ methodology. It’s really a series of related approaches, ranging from electron tomography to Focused Ion Beam SEM (FIB-SEM), Serial Block Face SEM (SBF-SEM), Array Tomography, and even GridTape TEM. For the majority of these, the underlying theme is sectioning a sample and imaging it in 2D (thereby getting a stack of 2D images), while in the case of electron tomography, a slab of the specimen is progressively tilted and imaged. Either way, the stack of images or “tilt series” is computationally reconstructed into a 3D volume, and that’s what makes volume EM.
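As a rough illustration of what ‘computationally reconstructed’ means for the sectioning-based approaches, here is a minimal sketch (the file names and voxel sizes are hypothetical) of how a folder of already-aligned serial-section images can be stacked into a 3D volume; tomographic tilt series require a proper reconstruction algorithm such as weighted back-projection, which dedicated software handles.

```python
# Minimal sketch: turning a folder of aligned serial-section images into a 3D volume.
# File names and voxel sizes are hypothetical; real pipelines also handle alignment,
# contrast normalization, and larger-than-memory data.
import glob
import numpy as np
import tifffile

slice_paths = sorted(glob.glob("aligned_sections/*.tif"))  # one 2D image per physical section
volume = np.stack([tifffile.imread(p) for p in slice_paths], axis=0)  # shape: (z, y, x)

# Voxel spacing is usually anisotropic for sectioning-based vEM:
# lateral pixel size is set by the microscope, Z by the section thickness.
voxel_size_nm = {"z": 50.0, "y": 8.0, "x": 8.0}
print(volume.shape, voxel_size_nm)
```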

D: What are some innovations in TEM or SEM that really enabled volume EM to become so prevalent?

K: Electron Microscopy has been with us for essentially a century, and the signals you acquired used to be entirely analog. You would obtain an exposure on film and have to either print or scan the resulting image. Traditionally you could manually section a resin-embedded sample, put the sections on grids, perform TEM, and image them sequentially – and many people have done it that way!

What has allowed volume electron microscopy to really take off? I would say it is the automation of the sectioning and the imaging. For example, in FIB-SEM you have a gallium ion beam that automatically mills a very thin layer off your resin-embedded specimen. You image the block face that’s exposed, then the beam nudges forward a little bit, shaves off the next layer, you image that face, and so on and so forth.

Thus, the automation and synchronization of the sectioning and the imaging, which allow you to just blitz through these volumes, are what enabled the adoption of volume EM techniques globally.

D: Right. And within the past 5-10 years, I’m sure machine learning has also helped with those large data sets that normally would be very, very daunting to approach.

K: You’re absolutely right about that. The automation and synchronization are purely on the image acquisition side. But you know, one of my favorite things to ask is, “OK, you’ve got gobs of data. Now what are you going to do with it?” In recent years, there has been a raft of machine learning and deep learning-based approaches that allow you to extract your features of interest from these giant gobs of data without completely manual, slice-by-slice segmentation.
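To make that concrete, here is a minimal, hedged sketch of the kind of slice-by-slice inference loop such tools automate. The network is assumed to be a pretrained 2D segmentation model passed in by the caller (how it is trained is not shown), and packages such as empanada wrap this sort of loop with far more sophistication.

```python
# Minimal sketch of slice-by-slice semantic segmentation of a vEM volume with a
# pretrained 2D network. The model, its input normalization, and the threshold are
# assumptions for illustration only.
import numpy as np
import torch

def segment_volume(volume: np.ndarray, model: torch.nn.Module, threshold: float = 0.5) -> np.ndarray:
    """Return a binary (Z, Y, X) mask of the feature of interest."""
    model.eval()
    masks = []
    with torch.no_grad():
        for z in range(volume.shape[0]):                          # iterate over 2D slices
            x = torch.from_numpy(volume[z]).float()[None, None]   # shape (1, 1, Y, X)
            prob = torch.sigmoid(model(x))[0, 0].numpy()          # per-pixel probability map
            masks.append(prob > threshold)                        # binary mask for this slice
    return np.stack(masks, axis=0)
```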

D: Speaking of ‘Big Data’ challenges, a 2015 article in PLOS Biology (Stephens et al.) predicted that by 2025 the biosciences would surpass astronomy in the diversity, complexity, and sheer volume of their data. Are there some programs or protocols that are useful in parsing this data to make it more manageable?

K: Instruments are increasingly able to acquire larger and larger volumes of data. For volume electron microscopy, you can generally split this into two very broad areas of research (connectomics and cell biology).

On the one hand, you have connectomics, creating wiring diagrams of neurons in the brain – they need the full volume to complete their work. Successful scientific groups in America and Europe have created gigantic data sets that map wiring diagrams in the entire Drosophila brain, or in small sections of the human brain, the mouse brain, and so on. They have led the way in terms of looking at very large volumes of data and have come up with software solutions that allow you to handle those data.

D: Like Visbrain, WebKnossos, or Amira?

K: Yes, these are some options, and there are an increasing number of solutions percolating through the community that allow volume EM scientists to handle those larger kinds of data, even if they’re not within the connectomics community. In the cell biology community, there may not yet be a burning need for such vast quantities of data (such as the recent H01 1.4-petabyte data set of a human brain tissue sample). But if you’re in the terabyte range, or even 100 gigabytes, that starts to get clunky if you’re not careful.

What will bring the community to the next level in terms of handling these large data is the incorporation of next-generation file formats and standardized metadata. In the medical field, DICOM eventually arose as a universal standard for CT and MRI data. Our EM community is slowly coalescing towards OME-Zarr as a file format and REMBI for metadata, and awareness is beginning to increase. These next-generation file formats allow larger and larger data to be handled using very clever pyramidal schemas and other tricks.
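A minimal sketch of that pyramidal idea is shown below, using a zarr v2-style API and naive downsampling purely for illustration; in practice the ome-zarr-py library writes the full OME-Zarr (OME-NGFF) layout and metadata for you, and the details here are simplified assumptions rather than the exact specification.

```python
# Minimal sketch of a multiscale ("pyramidal") volume store in the spirit of OME-Zarr.
# The metadata written here is simplified; real OME-Zarr files follow the NGFF spec.
import numpy as np
import zarr

volume = np.random.randint(0, 255, size=(512, 1024, 1024), dtype=np.uint8)  # stand-in vEM volume

root = zarr.open_group("cell.zarr", mode="w")
datasets = []
for level in range(3):                                   # full resolution plus two coarser levels
    factor = 2 ** level
    down = volume[::factor, ::factor, ::factor]          # naive subsampling for illustration
    root.create_dataset(str(level), data=down, chunks=(64, 64, 64))
    datasets.append({"path": str(level)})

# A viewer can then stream only the chunks and the resolution level it needs.
root.attrs["multiscales"] = [{"datasets": datasets, "axes": ["z", "y", "x"]}]
```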

My hope is that vendors will come together to provide customers with an agreed-upon open-source file format. If you want data to be truly FAIR (findable, accessible, interoperable, reusable), you want it in a universal format. We’re not there yet, but I’m confident that it will happen soon!

D: Yes, and with the variety of methods – all ultimately obtaining very similar data – I’m sure an interoperable data format would be necessary to bridge the gap between them. So, going more into the diversity of vEM methods, is there a specific kind of volume EM that you specialize in? What are some of the advantages of your instrumentation at NIH?

K: I really cut my teeth on volume electron microscopy when it was still very much in its infancy. The use of a dual-beam SEM (FIB-SEM) allowed a gallium ion beam to slice through a very, very soft insulating material. There were a lot of basic challenges that needed to be sorted out to do things that we now take for granted, but that continues to be the bread and butter of our group here at the ATRF. We feel like we have a lot of control over the FIB-SEM too, although it can be a very complex system.

Since FIB-SEM uses a gallium beam, you essentially have an electronically controlled milling step as opposed to a mechanical blade going back and forth. That automatically gives you much more computational control over how much your gallium beam is milling compared to mechanical methods. This means the Z-step size is very well controlled with FIB-SEM, giving great 3D resolution – which is usually a big limitation of these datasets. We can mill steps as thin as 4-5 nanometers, granting the highest isotropic image resolution. All that to say, it is a very expensive and finicky tool at times.

Our other main tool in the lab is Array Tomography. Where FIB-SEM has limits on the size of the volume you can work with, Array Tomography allows you to decouple the sectioning and the imaging. You can do a whole bunch of sectioning, put the sections on a substrate, and then perform the imaging in the SEM. This allows for significantly larger volumes, albeit, as I mentioned, with a slightly lower Z-resolution because of the manual sectioning from microtomy.

D: And then you also retain the physical sections, correct?

K: Exactly! The big difference is that while FIB-SEM is destructive, Array Tomography is not.

So, when we were setting up the lab, it seemed to me like array tomography was the best complementary approach to FIB-SEM. Those are the two SEM approaches we have, and of course we do TEM tomography as well, which is a way of getting very high resolution, even higher than FIB-SEM but for anisotropic data in very small volumes. And in addition, we do fluorescence for CLEM.

D: Speaking of the light microscopy side of your lab, what are the benefits of electron microscopy over fluorescence?

K: So one, the resolution is going to be much higher with an electron source. And two (I have seen this personally so many times), volume EM allows for serendipity. I really love that. The biological community in the recent past has been heavily tilted towards hypothesis-driven science, and there’s nothing wrong with that! But because you’re seeing entire volumes in 3D, at high resolution, with the ultrastructure of everything right there in that volume, volume EM allows you to make chance discoveries that were previously not possible!

If you take fluorescence microscopy, you only see what you’re staining for – if you’re only staining for, let’s say, nuclei with DAPI, or your protein of interest with an antibody, all your other pixels are black. You have to know what you’re testing for, then you stain for it, and then you image it by fluorescence. Whereas with EM (we have published several examples of this), you can capture completely unexpected phenomena, which you can then use to create new hypotheses.

D: And you mentioned that you use CLEM – how beneficial is that for volume EM?

K: It is extremely beneficial. One inherent disadvantage of volume electron microscopy is that in a given specimen if you don’t know exactly where you’re going to image, you could be searching forever. CLEM, at least for us, has been most beneficial in solving the ‘needle in a haystack’ problem. You might have a really rare phenomenon, and you want the 3D ultra-structure of that biological feature, but you don’t want to brute force image through an entire specimen.

With CLEM, you tag your feature of interest fluorescently. Then you can image that by your favorite fluorescence approach and subsequently fix, stain, and resin-embed the sample in situ so that nothing moves. Now you essentially have a road map of where you want to target your high-resolution volume EM acquisition. This would be considered correlation for relocation.

Correlation for registration is a higher bar, where you want to computationally register every fluorescence signal, so that every group of fluorescently labelled pixels corresponds to a specific feature in the volume EM data set. That’s a little more technically challenging, but a lot of us have done it and it can be very, very powerful as well!
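As a toy illustration of correlation for registration, the sketch below fits an affine transform from a few manually matched landmarks so that fluorescence coordinates can be mapped into the EM volume; all point values are made up, and real pipelines use dedicated correlation software and more robust fitting.

```python
# Toy sketch: estimate an affine mapping from fluorescence (FM) coordinates to
# volume EM (EM) coordinates using matched landmarks (all values are invented).
import numpy as np

fm_points = np.array([[10.0, 12.0, 3.0], [40.0, 15.0, 3.5], [22.0, 60.0, 4.0], [55.0, 58.0, 5.0]])
em_points = np.array([[105.0, 130.0, 30.0], [415.0, 160.0, 36.0], [230.0, 620.0, 41.0], [565.0, 600.0, 52.0]])

# Solve em ~ [fm, 1] @ A in the least-squares sense (homogeneous coordinates).
ones = np.ones((len(fm_points), 1))
A, *_ = np.linalg.lstsq(np.hstack([fm_points, ones]), em_points, rcond=None)

def fm_to_em(point):
    """Map one fluorescence coordinate into the EM volume's coordinate frame."""
    return np.append(point, 1.0) @ A

print(fm_to_em(np.array([30.0, 30.0, 4.0])))
```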

D: You were recently awarded the Royal Microscopical Society’s Alan Agar Medal for Electron Microscopy – our congratulations! What are some of the innovations from your work that have helped the EM community?

K: So, I have to be honest, I did not know that I was going to be nominated! A bunch of people were very nice and supportive, and they nominated me – but I didn’t know about this until I got the award! It was such a pleasant surprise, and it’s such a nice affirmation that folks in your community see what you’ve done as useful.

So, do I know exactly what got my name up there? No, but I have a couple of guesses. I’d say that there are probably three things. One, I’d like to say that we’ve done some important work, primarily using FIB-SEM and core volume EM, to answer important questions in cancer and cell biology.

The second thing I would say is my role in building the volume EM community. This is something I care a lot about, as volume EM is still a nascent community. We’re still building it up, but it’s such a fun place to be, because people are very supportive and constantly learning from each other. But it takes a lot of work behind the scenes to bring that community together and keep it moving in a positive direction. So perhaps that was a factor too, and perhaps the most recent one.

Third, an area of focus in our group has been the application of AI – more specifically, deep learning tools for the segmentation and visualization of organelles or features involving human data. So perhaps our open-source software, empanada, could have been a factor as well. Maybe it’s one of these three, or some combination of the three, but I’m very grateful either way!

D: If someone would like to get involved in the community, or learn more about these techniques, what are some good resources to check out?

K: Anybody who’s reading this interview should go to volumeem.org and check it out. It’s an international website that serves the entire volume EM community, and it’s an extraordinarily good resource. We have put a lot of work into it and are very, very proud of it.

Also, there are some media accounts you can follow: @volumeEM1 on X, and the vEM Community on LinkedIn. I would also recommend joining the Slack workspace found via volumeem.org; it’s called EM of cells, tissues, and organisms (EM of CTO). It’s a fun little space.

And lastly, I would like to give a shout-out to data sharing. I think back in the day there was an almost protective, proprietary ownership of data, whereas now it’s very clear that the more you share, the more you can get out of it. I would urge volume EM practitioners to share their data, either at EMPIAR (the Electron Microscopy Public Image Archive) or another open image archive. These are very good resources for volume EM researchers, not just to share their data, but to see how many public data sets are already out there! These can serve their own labs or be used as a springboard for new collaborations.

D: As we’re wrapping up, are there any special thanks that you would like to extend?

K: That is a tough, tough question, because there are so many! I’m really thankful for the group that we have at the National Lab, very hard-working colleagues who do excellent work. I’m thankful for the volume EM community that we have. I would just say that these people who are so dedicated… they know who they are – I know who they are – and their work behind the scenes has been instrumental in creating this community out of nothing!

D: And finally, we love to ask, what is one fun fact about yourself?

K: When I was young, I was very good at chess. So much so that I was seriously considering becoming a pro. Unfortunately, it just fell by the wayside as other interests came up.

D: Yeah, it might be hard to keep up with all the openings when you’re aiming for a PhD or postdoc!

K: Although, chess has undergone such a rebirth in the last 10 or so years, especially with all of these chess engines being used now to help players compete against one another. Social media is also playing an unexpected role in making chess cool, and with the addition of these other ways to play – like where you scramble the pieces and whatnot, it’s an exciting time!

Volume EM 101 Workshop at Frederick National Laboratory
