Using light and sound to reveal rapid brain activity in unprecedented detail

This new imaging approach breaks long-standing speed and resolution barriers in brain imaging technologies and could uncover new insights into neurovascular diseases like stroke, dementia and even acute brain injury.

The research appeared May 17 in the Nature journal Light: Science & Applications.

Imaging the brain is a balancing act. Tools need to be fast enough to capture rapid events, like a neuron firing or blood flowing through a capillary, and they need to show activity at different scales, whether it’s across the entire brain or at the level of a single artery.

“You can achieve these things individually, but it’s very difficult to do them all together,” said Junjie Yao, an assistant professor of biomedical engineering at Duke. “It’s like choosing between having a fast car that is small and uncomfortable to sit in, or a large, spacious car that doesn’t go over 30 miles an hour. For a long time, there wasn’t a way to get everything you wanted at once.”

In their new study, Yao and his team describe how they’ve solved this long-standing trade-off by developing ultrafast functional photoacoustic microscopy, or UFF-PAM.

Photoacoustic microscopy uses the properties of light and sound to capture detailed images of organs, tissues and cells throughout the body. The technique uses a laser to send light into a targeted tissue or cell. When the cell absorbs that light, it heats up and expands almost instantaneously, creating an ultrasonic wave that travels back to a sensor.
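In textbook terms (the article itself does not spell this out), the strength of that ultrasonic wave depends on how efficiently the tissue converts absorbed light into pressure, how strongly it absorbs at the laser wavelength, and how much light actually reaches it:

```latex
% Standard photoacoustic generation relation (a textbook result, not a
% formula quoted from the study):
%   p_0    -- initial pressure rise that launches the ultrasonic wave
%   \Gamma -- Grueneisen parameter (thermoelastic conversion efficiency)
%   \mu_a  -- optical absorption coefficient at the laser wavelength
%   F      -- local optical fluence delivered by the laser pulse
p_0 = \Gamma \, \mu_a \, F
```

Because hemoglobin absorbs light strongly, blood vessels stand out in these images without any added contrast agent, which is why the technique is well suited to mapping the brain's vasculature.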

UFF-PAM relies on a combination of hardware advancements and machine learning algorithms to upgrade the technique. On the hardware side, a polygon scanning system sends more laser bursts to a larger area while a new scanning mechanism allows the laser scanner and ultrasound sensor to operate at the same time. According to Yao, these changes doubled the speed of their device, making UFF-PAM the fastest imaging technology in the photoacoustic community.
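To get a feel for why keeping the excitation and detection running continuously matters so much, here is a back-of-the-envelope sketch. All of the numbers are hypothetical placeholders, since the article does not give the device's actual specifications; the point is only that removing scanner dead time directly multiplies the frame rate.

```python
# Hypothetical throughput estimate for a point-scanning photoacoustic system.
# None of these values are the real UFF-PAM specifications.

laser_pulse_rate_hz = 1_000_000   # laser pulses (A-lines) per second (assumed)
pixels_per_line = 1_000           # A-lines per cross-sectional B-scan (assumed)
lines_per_frame = 500             # B-scans per full field of view (assumed)

def frames_per_second(duty_cycle: float) -> float:
    """Frames per second given the fraction of time actually spent acquiring data."""
    a_lines_per_frame = pixels_per_line * lines_per_frame
    return laser_pulse_rate_hz * duty_cycle / a_lines_per_frame

# A stop-and-go scanner wastes time turning around (low duty cycle);
# a scanner that excites and detects continuously wastes almost none.
print(frames_per_second(duty_cycle=0.5))  # -> 1.0 frame per second
print(frames_per_second(duty_cycle=1.0))  # -> 2.0 frames per second, twice as fast
```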

Yao and his team then developed a machine learning algorithm that improved the resolution of their images. They trained it to identify vasculature in the brain using over 400 images of mouse brains collected in previous experiments. Although each brain is unique, the algorithm learned how to identify common structures and used this knowledge to fill in previously missing pixels.
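As a rough illustration of that idea, the sketch below trains a small convolutional network to restore fine vascular detail from undersampled images using paired fast (undersampled) and slow (fully sampled) scans. The architecture, data shapes and training loop are purely illustrative and are not the network used in the study.

```python
# Minimal, hypothetical sketch of learning-based image enhancement for
# undersampled vascular images. Not the authors' actual model.
import torch
import torch.nn as nn

class VesselEnhancer(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Predict a residual correction and add it back to the input,
        # so the network only has to learn the missing fine detail.
        return x + self.net(x)

model = VesselEnhancer()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-ins for paired training data: undersampled fast scans (inputs) and
# fully sampled slow scans (targets) of the same mouse-brain vasculature.
low_res = torch.rand(8, 1, 128, 128)
high_res = torch.rand(8, 1, 128, 128)

for step in range(10):
    pred = model(low_res)
    loss = loss_fn(pred, high_res)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice the training targets would come from the slow, fully sampled scans the group already had on hand, which is consistent with the article's description of reusing more than 400 previously collected mouse-brain images.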

“The resulting images looked as detailed as the high-resolution images we would usually get if we went at a much slower speed, and we didn’t need to sacrifice a full field of view,” said Yao.

As a proof of concept, the team used UFF-PAM to visualize how blood vessels in a mouse brain responded to hypoxia, drug-induced hypotension and ischemic stroke. During the hypoxia challenge, UFF-PAM tracked how oxygen moved through the brain and showed that low levels of oxygen caused blood vessels to dilate.

In the second challenge, the team used the drug sodium nitroprusside (SNP), which is commonly used to treat high blood pressure. Previously, researchers thought that SNP caused all the blood vessels in the brain to dilate. But Yao and his team instead showed that only the larger blood vessels open up, while smaller blood vessels constrict.

“Because we quickly got a high-resolution view of the smaller vessels, we saw that dilation is not actually the universal response to the drug,” said Yao. “We saw that these small vessels couldn’t provide enough oxygen and nutrients to the tissue, which caused damage.”

In the final challenge, the team used UFF-PAM to observe how the brain responds to stroke and begins to recover. The team saw that immediately after a stroke, the blood vessels in the affected area constrict, which causes neighboring vessels to constrict in turn, a phenomenon called a spreading depolarization wave. Because of the large field of view and high imaging speed, the team was able to pinpoint the wave's starting position and track its movement as it propagated throughout the brain.
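One simple way to locate the origin of such a wave from a stack of frames is to record, for every pixel, the first frame in which its signal drops (the onset of constriction) and then find the pixel where that happens earliest. The sketch below is a hypothetical illustration of that idea on synthetic data, not the authors' analysis code.

```python
# Illustrative sketch: locating the origin of a spreading constriction wave
# from a time series of frames. Synthetic data only.
import numpy as np

def wave_onset_map(frames: np.ndarray, threshold: float) -> np.ndarray:
    """frames: (time, height, width) vessel signal; returns onset frame index per pixel."""
    below = frames < threshold                   # True once a pixel has constricted
    onset = np.argmax(below, axis=0).astype(float)
    onset[~below.any(axis=0)] = np.nan           # pixels that never constrict
    return onset

def wave_origin(onset: np.ndarray):
    """Pixel with the earliest onset time, i.e. where the wave started."""
    return np.unravel_index(np.nanargmin(onset), onset.shape)

# Synthetic example: a wave of constriction spreading outward from (20, 30).
t, h, w = 50, 64, 64
yy, xx = np.mgrid[0:h, 0:w]
distance = np.hypot(yy - 20, xx - 30)
frames = np.ones((t, h, w))
for i in range(t):
    frames[i][distance < i * 0.5] = 0.0          # signal drops as the wave passes

onset = wave_onset_map(frames, threshold=0.5)
print(wave_origin(onset))                        # -> (20, 30)
```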

Looking ahead, the team aims to use UFF-PAM to explore additional brain disease models, like dementia, Alzheimer’s disease or even long COVID. They also plan to expand the tool’s use outside of the brain to image organs like the heart, liver and placenta. These organs have traditionally been challenging to image because they are constantly in motion, so imaging tools need to be fast enough to keep up.

“There’s a lot that we can do with this technology now that we’ve addressed these long-standing roadblocks,” said Yao. “We’re trying to pick the most challenging projects to work on to maximize the impact of this technology.”

This work was supported by grants from the National Institutes of Health (R01 EB028143, R01 NS111039, RF1 NS115581, R21 EB027304, R21 EB027981, R43 CA243822, R43 CA239830, R44 HL138185), the American Heart Association Collaborative Sciences Award (18CSA34080277), and the Chan Zuckerberg Initiative Grant on Deep Tissue Imaging 2020-226178 by the Silicon Valley Community Foundation.
