PET Quality Control, Corrections and Processing (Presentation)
Video Transcription
Hello, my name is James Case, and the title of this section is Instrumentation Module 3B: PET Quality Control, Corrections, and Processing. Please take a moment to review my disclosures for this presentation. The learning objectives of this module are to explain the key concepts of PET, describe considerations for optimizing image reconstruction, identify and correct common PET artifacts, and calculate important PET quantitative values. The outline for this talk is as follows: first we will review the principles of PET imaging, then discuss how these principles are applied in cardiac acquisition protocols, go through how we reconstruct images, discuss some of the important and evolving advanced processing techniques, review some common PET artifacts, cover the principles of PET myocardial blood flow, and finally discuss how we assess the quality of cardiac blood flow studies.

This slide illustrates an important concept within PET. What we have here are four different acquisitions and reconstructions from four very different cardiac PET imaging machines. The top row is from an older dedicated cardiac PET system using a 2D acquisition. The next row down is another dedicated PET system, this time acquired in 3D. The next row down is an early-generation 16-slice PET-CT, and at the bottom is a modern time-of-flight reconstruction from a 128-slice PET-CT system. As we can see, all of these pieces of instrumentation can acquire a very high quality cardiac PET study. It is important that we understand the limitations and the context in which we can achieve these high quality results, the expectation being that no matter what instrumentation we are using, we should be able to obtain a high quality study.

Before we go further into PET, we want to review some of the key differences between cardiac SPECT and cardiac PET. In SPECT, a single photon is used to create our image. The camera is focused with a collimator, and this collimator is the key limitation of SPECT. Because the gamma rays used in SPECT cannot be focused with either a lens or a mirror, we have to rely on a collimator, which removes all but roughly one in 10,000 photons, greatly reducing sensitivity. From an image quality standpoint, by far the biggest difference between PET and SPECT is that attenuation correction is unavailable on most traditional SPECT systems and is not routinely used. In PET, we use two photons from a single decay, and no focusing is required. By identifying the arrival of both of these photons, we can determine along which line of sight the annihilation event had to have occurred. And because of the unique geometry of a cardiac PET system, attenuation correction is not only simple but necessary to perform.

This illustration illuminates that key difference. Within SPECT, if we imagine an emission taking place at a given location and traveling out toward the camera detector, we do not know how much attenuation it experienced based only on where it arrives at the detector. Because of this, the only way we can perform attenuation correction in SPECT is a complicated nonlinear reconstruction that gradually removes the effects of attenuation. PET, on the other hand, has the unique geometric property that if an annihilation event takes place here, the total attenuation along this line of sight is the sum of this distance plus this distance.
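To make that geometric point concrete, here is a small numeric illustration in Python; the uniform attenuation coefficient and the two path lengths are assumptions chosen purely for illustration:

import math
mu = 0.096          # approximate linear attenuation coefficient of water at 511 keV, in 1/cm
d1, d2 = 8.0, 14.0  # assumed path lengths (cm) from the annihilation point to each detector
# The attenuation factor depends only on the total chord length d1 + d2,
# so it is identical for every annihilation point along that line of response.
attenuation_factor = math.exp(-mu * (d1 + d2))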
So regardless of where the annihilation event takes place along this line of sight, it always requires the same amount of attenuation correction, making the correction very simple, yet necessary to apply in all circumstances.

There are four different types of photon events within PET. The ones that we want to acquire and use to create our images are called true pairs. A true pair is when the detector system receives a 511 keV photon at one detector and its companion at the opposite side; by locating both of these events, we can determine along which line of sight that annihilation occurred. Now, the system can be confounded by other sorts of events. For instance, there are singles: events in which a 511 keV photon from an annihilation is received by the detector system without its companion being identified. Another is scatter, similar to what we see in SPECT; a scatter event is when a 511 keV photon scatters off electrons in the medium and travels along a different trajectory, creating a fictitious line of sight. And finally, a type of event we have in cardiac PET that we do not see in cardiac SPECT is the random: a random is when two singles arrive at the detector system within the same timing window, creating a fictitious line of sight.

Cardiac PET has four different types of acquisitions, three of which are emission acquisitions and one of which is the transmission image. The three key emission acquisitions are the static image, used for assessing tracer uptake and myocardial perfusion; the gated study, an ECG-gated study in which counts are binned into specific time frames across the R-to-R interval for assessing cardiac wall motion; and the dynamic study, used for measuring the kinetics of the tracer uptake and myocardial blood flow. The fourth acquisition is the transmission study, used for attenuation correction; it can be acquired using either a CT scan or a line source.

Reconstruction can be performed, just as with SPECT, using several different algorithms. Within PET, filtered back projection is rarely used; more commonly, iterative OSEM reconstruction is used, and other more advanced techniques that we will not discuss today can also be applied. The iterative algorithms do not rely on strictly inverting the projector; instead, they apply a stepwise approach for fitting our reconstruction solution to the projection data. By using a physics model of how the study is created, we can find volumetric images that match both the physics of the scanner and the data that were acquired. Some of the most common algorithms are the maximum likelihood (MLEM) algorithm and the ordered subset expectation maximization (OSEM) algorithm. These are very similar, except that OSEM is an acceleration technique for the maximum likelihood method. The way these algorithms work is that with each iteration, we improve our estimate of what the image looks like. As we can see here, at iteration one the image is very blurry, and as we continue iterating, we separate the different structures from one another, increasing the resolution of our reconstruction.
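As a rough sketch of what one of these iterative updates looks like, here is a minimal maximum-likelihood EM step written in Python; the matrix A standing in for the scanner's projection model, and the function name, are illustrative assumptions rather than any vendor's actual implementation:

import numpy as np

def mlem_update(x, A, y, eps=1e-12):
    # x: current image estimate, A: system (projection) matrix, y: measured projection counts
    forward = A @ x                          # forward-project the current estimate
    ratio = y / np.maximum(forward, eps)     # compare the measured data with the estimate
    sensitivity = A.T @ np.ones_like(y)      # normalization term (back-projection of ones)
    return x * (A.T @ ratio) / np.maximum(sensitivity, eps)

OSEM applies the same update using only a subset of the projections at each sub-iteration, which is why iterations times subsets behaves roughly like that many full maximum-likelihood iterations.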
By five iterations, we're starting to see the background fade and the ventricular wall and cavity beginning to separate. By 30 iterations, we're starting to see an increase in the noise, and by 100 iterations the image is dominated by noise. So as we iterate to an optimal point, we get peak image quality, but if we iterate too far, we end up overemphasizing the noise properties of the image and losing some of the fidelity of the actual image.

Here's another illustration that demonstrates the same point. In the top row, we use one iteration and four subsets, which is effectively the same as four iterations of the maximum likelihood method. We can see the same property: a very blurred cavity, because we haven't iterated enough to separate the structures from one another. Another thing we see in this image is an artifact laterally and septally, because there is more blur in-plane than between planes. At two iterations and eight subsets, the structure starts to look more like what we would expect the heart to look like, and we reach an optimum at four iterations and 16 subsets, which is effectively the same as 64 iterations; the number of iterations times the number of subsets gives the effective number of maximum likelihood iterations. Here we get a good quality image, and this is typical: anywhere between 50 and 70 effective iterations is usually necessary for a high quality study. But just as we saw on the previous slide, if we go too far, with 16 iterations and 16 subsets, effectively 256 iterations, not only does the reconstruction time increase, we start to overemphasize the noise, and whatever is gained by adding iterations is lost to the noise being introduced into the image.

Just as in SPECT imaging, we also apply a post-reconstruction filter. Most commonly used in general PET imaging is a spatial filter, and the most common of those is the Gaussian. It blurs out the noise in the image by creating a neighborhood average inside a kernel, whose width describes how closely the filter assumes neighboring pixels are related to one another. Frequency-based filters, more common in cardiac applications, use a Fourier transform: the image is transformed into frequency space, which is a representation of the image in terms of spatial frequencies, the higher-frequency sharp noise components are removed, and the result is transformed back into image space. One of the nice things about frequency filters as opposed to spatial filters is that they tend to give better contrast between structures and maintain some of the sharpness of edges, such as the endocardial and epicardial boundaries. More recently, adaptive filters have been introduced, which incorporate known properties of the images, such as edges and long continuous stretches of similar counts like we would see in the myocardium, allowing the filter to take some of the properties of the image into account.

Here's an example of a frequency-based filter. One is a fifth-order filter with a 0.33 cutoff frequency, and you can see we haven't filtered enough in this image to suppress the noise. By reducing the cutoff frequency, we exclude more of the high-frequency components, retain more of the lower-frequency structure of the heart, and get an improvement in image quality.
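For reference, the frequency-domain filter described here is commonly a Butterworth-type filter in nuclear cardiology; below is a minimal sketch of applying one to a 2D slice, with the cutoff expressed in cycles per pixel (Nyquist = 0.5) and all names being illustrative assumptions:

import numpy as np

def butterworth_lowpass(image, cutoff=0.25, order=5):
    fy = np.fft.fftfreq(image.shape[0])                        # row frequencies, cycles/pixel
    fx = np.fft.fftfreq(image.shape[1])                        # column frequencies, cycles/pixel
    f = np.sqrt(fx[np.newaxis, :]**2 + fy[:, np.newaxis]**2)   # radial spatial frequency
    response = 1.0 / (1.0 + (f / cutoff)**(2 * order))         # Butterworth low-pass response
    return np.real(np.fft.ifft2(np.fft.fft2(image) * response))

Lowering the cutoff removes more of the high-frequency content and smooths the image further, which is exactly the trade-off described above.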
But keep in mind, we cannot restore images if counts were not acquired. There is always a limit to what filters can do, and we should never use filtering as a substitute for acquiring adequate counts. Another risk of changing the filter for a particular patient is that we can essentially get back what we want with the filter, not necessarily what is actually there.

In addition to the OSEM and maximum likelihood algorithms, there are a number of advanced reconstruction techniques in use today. One thing these advanced techniques all have in common is that they use an iterative algorithm for finding optimal solutions to the reconstruction. One such technique uses 3D acquisition, which takes advantage of the fact that cardiac PET does not require collimation: instead of acquiring only photons traveling perpendicular to the detectors, we can acquire oblique planes, increasing the sensitivity of the system. We can also take advantage of time of flight, a technique we will discuss in detail in just a bit, which exploits the fact that the speed of light is finite; if we look at the ring of detectors and the two annihilation photons arriving at the detector at slightly different times, we can use that difference to identify where along the line of sight the event most likely occurred. And Bayesian techniques use known properties to favor certain types of solutions; in transmission imaging, for example, they can be used to favor water-like attenuation properties or to penalize solutions corrupted by implanted metal devices and so forth.

I want to go into a little more detail about 2D versus 3D imaging. As was stated early on, we do not need septa or a collimator to create an image in cardiac PET, but many acquisitions use a system of septa to exclude events that are not traveling perpendicular to the detector face. Why would we want to do that? We obviously lose a tremendous amount of efficiency for all of the events traveling obliquely to the camera face. The reason is that we gain a lot of image quality by excluding scatter mechanically with the septa: in order for a scatter event to be received, a photon would have to be scattered toward one detector and its companion would have to be scattered at exactly the same angle within the same plane. These septa can reduce the impact of scatter mechanically by a factor of 10 or more. Another reason why 2D ends up being used in many cases is that it significantly reduces the number of random events, simply by reducing the overall sensitivity of the system. 3D, on the other hand, allows all of these events in. Although we get a huge, theoretically almost sevenfold, increase in the sensitivity of the system, we also allow all of these unwanted events in, so we must have accurate correction techniques for removing them.

In addition to this increase in sensitivity, we have an added challenge with rubidium: a small fraction, about 13%, of rubidium decays include a second high-energy prompt gamma ray. This additional gamma ray can fool the scatter correction algorithms into overcorrecting for the impact of scatter. 3D imaging can allow for a very low dose, between 20 and 30 millicuries, without sacrificing any diagnostic accuracy.
But please be aware that some older systems do not have the capability of doing 3D imaging, either because of the type of detector used or the corrections built into the system.

A bit more detail on this prompt gamma effect and why it is so important, if you are doing 3D imaging, to be sure the prompt gammas are corrected for: about 13% of rubidium decays include a contaminating 776 keV photon. When this photon enters the system, it can fool the scatter correction algorithm into overcorrecting. Above is an example of a smaller patient with a BMI of 22; in this instance the patient has a scatter fraction of about 31%, and prompts make up only about 5% of the counts received in the image. Now, if we look at this larger patient with a BMI of 32 and a higher injected dose, the scatter fraction has increased to about 37%, the prompts have gone up to about 11%, and look at the significant problem that introduces. In the top two rows we can see an artifact high anteroseptally, which in the lower two images goes away with prompt gamma correction. So it is very, very important to recognize that prompt gamma correction is an absolute necessity when acquiring in 3D mode.

The optimal reconstruction strategies that we use have to take into account all of the relevant physics: attenuation, scatter, prompt gammas, and detector geometry. The reconstruction must not introduce its own artifacts and must be allowed sufficient time to converge, and post-reconstruction filters should not be used as a substitute for sufficient counts.

Time-of-flight reconstruction has become one of the most interesting new additions to the reconstruction options available. In a traditional reconstruction, we only know the locations at which the two photons were observed by the detector, so the reconstruction algorithm has to assume an equal probability that the pair was emitted anywhere along that line of sight. With time-of-flight reconstruction, if photon A arrives at one time and photon B arrives at a slightly different time, a few hundred picoseconds apart, we can narrow down where along the line the annihilation occurred to a much smaller range. This creates much better separation of different structures. Looking at the study below, despite the fact that there is very hot bowel activity right here, there is very good separation from the myocardium, with virtually no spillover between the hot bowel and the inferior wall. And because these reconstructions converge more quickly, there is less reconstruction time, better separation of different tissues, and so forth. So time of flight has become a very important new addition to the family of reconstruction algorithms available.

Here are images of the same phantom acquired on three different scanners. On the left is a 16-slice early-generation PET-CT system. In the middle is a 128-slice PET-CT using time of flight. The last one, on the far right, is a digital PET-CT, also acquired with time of flight. I want you to pay attention to these dots right here: this is the smallest, one-centimeter feature. As we can see in the 16-slice PET-CT, it is visible within the phantom, and it is also visible with time of flight; it is in a different position here, at the 8 o'clock position of the phantom. On the digital system, the one-centimeter feature is clearly resolved and much less noisy than on the other two scanners.
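As a quick worked example of the time-of-flight idea described a moment ago: the arrival-time difference localizes the annihilation at a distance of (speed of light times time difference) divided by 2 from the midpoint of the line of response. Assuming a hypothetical 400-picosecond coincidence timing resolution:

c = 3.0e10                    # speed of light, cm/s
timing_resolution = 400e-12   # assumed coincidence timing resolution, seconds
delta_x = c * timing_resolution / 2   # about 6 cm of localization along the line of response

So even very good timing does not pinpoint the event, but constraining it to a few centimeters is enough to speed convergence and improve the separation of structures, as described above.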
Returning to the phantom comparison: if we look at the background signal-to-noise with the same amount of activity, it increases slightly with time of flight on the 128-slice acquisition. But look at the big change when we go from a traditional PET-CT to the digital PET-CT: a significant increase in overall sensitivity.

Looking at that within an image, this is digital PET with time of flight and a 20-millicurie injection, digital 3D time of flight. We have also corrected for diaphragmatic breathing artifacts as well as cardiac motion, as we can see in this image. This is the potential of where we can get to with rubidium imaging: we include all of the physics in our iterative reconstruction and obtain very high quality studies with very well defined myocardial walls and very high resolution.

Just reviewing the different camera characteristics and how we achieve these improvements in accuracy. High sensitivity improves our defect detection and the accuracy of our studies; it also improves our quantitation in terms of counts and minimizes partial volume effects, and it allows us to acquire with a sharper filter cutoff and higher resolution. Things we can do to improve sensitivity are 3D scanning and digital scanning with an increasing number of detector rings, allowing more in-plane and oblique planes to be included in the acquisition. High resolution improves our quantitation by reducing partial volume effects, and also allows us to separate and identify different features, such as a defect from normal tissue and background objects from target objects. Things that improve it are 2D scanning, which will always improve resolution over 3D scanning because we are not allowing contaminating scatter and randoms into the acquisition in the first place, and reducing the amount of post-reconstruction filtering. Shorter dead time prevents system saturation and inaccurate blood pool estimates; we can improve dead time with 2D scanning, faster electronics, lower activity per second in the infusion by stretching out the rubidium bolus, or by using other agents that require less activity. And finally, time-of-flight reconstruction improves the separation of features and is available on the newer scanners that support it.

The new type of acquisition that we are going to acquire with all of our studies is the transmission scan. A high quality transmission scan, suitable for attenuation correction, will have accurate and well-defined lung boundaries, smooth counts in the mediastinal region, and good registration with the emission data. With line source-based attenuation correction, we acquire the attenuation map by inserting a radioactive line source into the field of view and creating a measurement of the patient-specific attenuation with that source. By acquiring an image with and without the line source in the field of view, we can create a transmission sinogram describing the relative loss of counts, and by multiplying the resulting correction factors into the emission counts, we create the attenuation-corrected raw sinogram data, which can be used directly for attenuation correction.
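A minimal sketch of how a line-source correction of this kind can be applied on sinogram data is shown below; the array names and the use of a separate blank (no-patient) scan are assumptions for illustration, not a description of any specific vendor's workflow:

import numpy as np

def attenuation_correct(emission_sino, blank_sino, transmission_sino, eps=1e-9):
    # blank_sino: line-source scan with no patient in the field of view
    # transmission_sino: line-source scan with the patient in place
    acf = blank_sino / np.maximum(transmission_sino, eps)   # attenuation correction factor per line of response
    return emission_sino * acf                              # attenuation-corrected emission sinogram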
CT-based attenuation correction is similar to line source attenuation correction, except that instead of directly measuring the loss of counts from one side of the patient to the other, we start with the reconstructed CT images. The values within a CT image are in Hounsfield units (HU), and we have to translate those Hounsfield units into attenuation values appropriate for 511 keV photons. The image also needs to be blurred to match the resolution of the PET scanner. Once we do that, we create our CT attenuation correction map (CTAC map), which we can then reproject into a sinogram, as if it were a virtual transmission acquisition, to create a transmission sinogram; just as with line source attenuation, we multiply that into the emission data to create the final attenuation-corrected study.

Most artifacts in cardiac PET are a result of patient motion, and there are several different types of motion that can take place within a study. There is misregistration, which is a change in the patient's position between the transmission and emission acquisitions; as you can see here, the heart is overlying the lung field, creating a fictitious artifact in the anterior wall. There can be motion exclusive to the CT scan; as we can see here, we have a breathing artifact where the patient took a breath in the middle of the scan, and the liver appears in two different locations in the study. And then there is motion purely contained within the emission scan, called intra-scan motion.

I ask that you take a moment to try to interpret this study. It would be very difficult to read it as anything other than abnormal: there is a large reversible defect laterally, anteriorly, and septally that reverses well at rest, so the conclusion would have to be that this is an abnormal study. But when we review the overlay between the transmission and emission data, you can see the heart is out in the lung field, and that misregistration introduces the significant artifact. When we position it correctly, the study is completely normal. It is very important that on every study you review the transmission study alongside the emission study to confirm that the positioning is correct. How important is misregistration? It is estimated that as many as 21% of resting studies have misregistration, and a shift of as little as one centimeter can introduce a significant artifact. In one study, almost 40% of studies could have had a false positive result as a consequence of misregistration. It may also have an impact on the measurement of blood flow, so the conclusion is that all studies should be routinely inspected for misregistration and corrected whenever possible.

Another form of motion is intra-scan motion, which is when the patient moves during the acquisition of the emission data. If we imagine for a moment that the patient is at a particular position within the PET scanner and then leaves the scanner partway through, we would end up with a lower-count, but perfectly reconstructible, set of data, just with fewer counts. Now, if we follow that thought experiment to a different situation, where the patient is at one position and then moves to a new position, we have another set of tomographic data superimposed on top of the original position.
The effect is that when patients move in PET and PET-CT, the image is smeared along the direction of travel. If the patient moves, in this case, in the upward direction, we lose counts along the direction of travel, but as you can see, perpendicular to the direction of travel the counts still overlay one another. So what we end up with is 180-degree opposed defects. Things that can cause patients to move during the emission study include falling asleep, since the deep respiratory motion of sleep can introduce artifacts, heavy breathing during stress due to discomfort, coughing, et cetera. Here's an example of an intra-scan motion artifact, created artificially using a 12-millimeter shift. You can see in this image a loss of image fidelity as well as a fading of the lateral and septal walls.

Another common source of motion artifacts is motion within the transmission study, and this can happen because of a poor breath hold in PET-CT. Normally, with a dedicated PET system, the transmission study is a free-breathing study in which the diaphragm goes through its complete respiratory cycle, so the transmission image is an average of all the positions through the cycle. That is the same situation as in the emission data, so there is usually a good match between the diaphragm position in the transmission and emission data on a dedicated PET system. However, in PET-CT, we usually only get one look at the diaphragm. So if the acquisition captures the diaphragm at one position and the patient then takes a breath, placing the diaphragm at a different position, we might capture the diaphragm in two different places. As we can see in the far left upper panel, the liver is elevated at one point of the CT acquisition up here, and then when the patient takes a breath, the diaphragm appears again down below. We capture the diaphragm in two different places, and this produces the significant defect in the anterior wall due to the multiple positions of the heart. The other thing that can happen is a cough. As we can see here, there is a small bounce in the heart and mediastinum right here, creating a sawtooth appearance in the sagittal view of the transmission map, and that takes a bite out of the myocardial counts.

To reduce breathing artifacts within PET-CT, several different strategies have been employed. One is slow-pitch free-breathing, in which the patient is passed slowly through the scanner to create the same sort of situation we would have with dedicated PET, capturing an average diaphragm position. This is a good strategy; however, it is very important that the scanner used has the capacity to limit the patient radiation dose, so if you are going to try this approach, do check with your camera manufacturer to make sure your camera can do this type of acquisition. Another approach that is commonly used is shallow breathing, in which the patient takes small breaths during the CT scan to limit the motion of the diaphragm. This can be a very challenging technique if the patient does not practice the breathing technique prior to the acquisition, and it may be impossible for patients during stress if they are very uncomfortable during the acquisition.
Then finally, another technique that has been commonly used is an end-expiration breath hold: as opposed to inspiration, where you would normally breathe in and hold your breath, the patient is asked to lightly breathe out and then hold their breath. A lot of patients will be uncomfortable because they have never held their breath this way before, so again, it is important that you practice with the patient beforehand to make sure they are comfortable with the end-expiration breath hold.

The last artifact that we can run across in PET-CT is the metal artifact. A number of patients will have shock coils or other implanted metal devices, and metal very efficiently absorbs the x-rays, creating star artifacts. As you can see here, these wires along the right side of the heart create a star artifact that contaminates the entire image. To correct for this, several metal artifact reduction techniques have been employed that can reduce the impact of these artifacts. As you can see in the top row, before correction there is a hot spot inferoseptally, which goes away when metal artifact correction is applied.

One of the most exciting changes in cardiac PET is the introduction of absolute blood flow imaging. This technique has a significant impact on detecting true normals, the true extent of ischemia, and multivessel disease. One of the key differences between a traditional perfusion study and a blood flow study is the acquisition of a second piece of information: measuring the activity within the blood as a way of normalizing the myocardial uptake. The way we do that is to acquire a dynamic acquisition, capturing the transit of the activity through the blood pool and into the myocardium, and then, using a mathematical model, we solve for the thing we really want, which is the amount of blood supplied to the myocardium.

I'd like you to take a moment and try to assess which of these is the normal study and which is the abnormal study. As we can see, both of these studies appear to have relatively uniform uptake of rubidium. There may be some differences in image quality: the one on the left has an apparent hotspot in the high lateral segment, which makes the rest of the wall appear a little lower, and the one on the right has a lot of activity still in the cavity and a difference in resolution between rest and stress. If I asked you to figure out which of the two is the abnormal one, it would be a very difficult task, because based purely on the image appearance it is extremely difficult to tell. When we add myocardial blood flow reserve, it becomes very simple. The one on the left has a blood flow reserve of 3.3; we would expect normal patients to have a blood flow reserve better than 2.0, so this is a normal study with a few image artifacts but a very healthy myocardial blood flow reserve. The one on the right is a patient with extensive microvascular disease, a coronary calcium score above 3,000, and a high risk for multivessel coronary disease; this would be our abnormal study. Similarly, if we look at this next pair of studies, we would have a hard time figuring out which is the single-vessel disease case and which is the multivessel disease case. As we can see, the one on the left has a blood flow reserve of 2.1, indicating that all of the regions, except for that one region on the right, are well perfused and increase with the vasodilator.
And the one on the right is the one with extensive multivessel disease and an abnormal flow reserve of 1.59.

One of the things about myocardial blood flow imaging that is both exciting and worrisome is that, in the cases where blood flow imaging adds value, it gives us the ability to see more information than can be assessed visually, which means we can trust the numbers and don't necessarily have to trust our eyes. That is both an exciting and a frightening concept. So when we are doing cardiac blood flow imaging, we need to be very sure of the quality control, because if we are going to trust these numbers, in some cases we will be overruling the visual appearance of the image.

The way we calculate myocardial blood flow is with something called a compartmental model. A compartmental model describes how the activity is transported from the blood into the tissue and how that activity is retained within the tissue. There are several different mechanisms at play: the transport efficiency into the tissue, any process allowing the tracer to wash back out into the blood, and the blood flow rate through the coronary arteries and the vascular bed. One such model is the single-tissue compartment model, in which we imagine that there are really only two places the activity can be: in the blood or in the tissue. There is K1, which relates to the blood flow that we want to solve for, combining the flow with the efficiency of transport into the tissue. Then there is k2, which governs how well the tracer is retained; with rubidium, activity does wash back out into the blood somewhat over time. The way the single-tissue compartment model works is that we take several measurements, shown as the blue circles, as a function of time, then we apply a nonlinear fit, solving for K1, k2, and the partial volume term, and the best model fit gives us the quantitative blood flow values.

Another approach is the net retention model. These models work by assuming that washout is a small contributor to the overall blood flow measurement. This can only be accomplished using a much shorter acquisition, before k2 really has a chance to change the amount of uptake. In this particular example, the acquisition is done over a shorter 150-second study, as opposed to the full seven-minute study; we capture the blood pool at several different time points and then, at the end, we capture the uptake. The benefits of this approach are that it can be done with a shorter acquisition time, it can be done on both frame-mode and list-mode systems, and it only requires the integral of the arterial input, with no fitting involved. The problems it runs into are the assumption of no washout and the need to model the partial volume correction for each different imaging system.

So, just reviewing again, the two key models in use today for rubidium are the net retention model and the single-tissue compartment model. The advantages of net retention are that it can be done in a short period of time, list-mode or frame-mode data can be used, it only requires the integral of the arterial input, and it is less sensitive to motion. Its big disadvantages are that k2 has to be assumed to be zero and that partial volume has to be modeled for each different scanner.
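Before comparing the two models further, here is a minimal discretized sketch of the single-tissue compartment model described above; the rectangle-rule convolution, the simple spillover term fv, and all variable names are illustrative assumptions rather than any product's implementation:

import numpy as np

def tissue_curve(t, K1, k2, fv, arterial_input):
    # Tissue activity is K1 * exp(-k2 * t) convolved with the arterial input curve;
    # the modeled PET measurement mixes in a blood / spillover fraction fv.
    dt = t[1] - t[0]
    impulse = K1 * np.exp(-k2 * t)
    ct = np.convolve(arterial_input, impulse)[:len(t)] * dt
    return (1.0 - fv) * ct + fv * arterial_input

A nonlinear least-squares fit of K1, k2, and fv to the measured frames then yields K1, which is related to myocardial blood flow through the tracer's extraction characteristics.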
The single-tissue compartment model, by contrast, solves for three key variables as a fit rather than making those assumptions, and it makes use of data throughout the entire acquisition. The disadvantages are that it requires a list-mode study, it requires motion correction, the later frames have fewer counts and that needs to be taken into account, it is dependent on the shape of the input bolus because it is a model fit, and we must assume a weighting scheme for dealing with the changing counts.

Regardless of which blood flow model we use, we can get good blood flow measurements with either approach, so long as we do an adequate job of assessing the quality of the acquisition. The first things to check relate to the timing of the bolus: we want to make sure we capture the entire bolus, the inflow of the activity through the blood pool and then its clearance, and finally that all the boundaries and ROIs we are using, the myocardial boundaries and blood pool ROIs, are correct. There are also some good checks we can use to make sure the final result makes sense, such as the fact that the double product is correlated with resting flow, and that coronary calcium tends to relate to reduced flow. We also want to watch for clear model violations, such as shunts or bypass grafts, which can change the supply mechanics. Finally, we need to make sure we have been trained in how to do this based on the tracer model being used, the instrumentation, and the vasodilator being employed.

Now, for the flow values: what really is a normal study? Here are some differences between what we might find in the literature and what is actually out there in practice. What we have displayed here are the commonly accepted literature values for resting blood flow, around 0.7 ml per gram per minute, which is similar to the clinical experience of patients with a low summed stress score. However, when we look at the difference between the stress blood flow values we might find in the literature and what you might find in a normal-appearing patient, there is a very significant difference. The reason is that the literature values come from normal volunteers; you need to look at the age of the patients brought into those studies, and a lot of times they are very healthy young people. Typical cardiac patients have disease, which is what brought them into the laboratory in the first place. So what we see in this study of 7,260 patients is a significant difference between normal volunteers and normal patients, and that is also reflected in the blood flow reserve. The first quality control item we have to look at is the timing of the blood pool bolus.
So if we look at this particular study, and remember what we saw on the previous slide, we have a blood flow at stress of 1.97 and a resting blood flow of 0.66, and we want the blood flow reserve to be greater than two, so this would appear to be a very normal looking blood flow study. But one of the first things we need to do is assess whether we captured the entire blood pool bolus, and as we can see in the red curve, which is the activity in the blood pool at stress, it reaches a very early peak and then empties, meaning that we started the scanner after we had started the infusion, so we did not actually capture the entire input bolus. It is very important that the entire bolus is captured, as you see in green at rest. So can we interpret a study that has not captured the entire input bolus? The answer is no, regardless of the blood flow model, the scanner, or the apparent blood flow values. If you see that the peak blood pool activity occurs very near the start time of the camera, or there is a lot of activity in the very first frame, that indicates you have underestimated the blood pool input and will end up overestimating the blood flow.

A number of different approaches and software packages are used for identifying the blood pool ROI. For the particular software we are looking at here, the requirement is that the blood pool ROI, seen as a box right there, needs to be within the left atrium. The ideal location will depend on the software, but one common theme is that it does need to be positioned within the blood pool.

Another important feature we can use for assessing whether we have a good quality study and good quantitative values is the fact that the double product, or rate-pressure product, which is the product of systolic blood pressure and heart rate, is proportional to the resting blood flow of the myocardium. This is because the heart itself is simply a pump, a pump that needs energy, and the amount of energy it needs is related to how hard it pumps and how often it pumps, hence the systolic blood pressure and the heart rate. Not too surprisingly, this was recognized very early on, in this 1993 publication, which showed a linear relationship between the double product and myocardial blood flow at rest. Now, the whole purpose of vasodilator stress is to decouple work demand from blood flow, and not surprisingly, even in that early study, it was recognized that there is no relationship between dipyridamole stress blood flow and the double product, which should not be a surprise. Fast forward to 2015 and this examination of about 3,000 low-likelihood patients, pretty much just placed on a scatterplot without much filtering for medications, type of patient, and so on: we still see this linear relationship between blood flow and double product. It is important to keep in mind that when we find studies that diverge from this line, such as these up here, or fall off the line significantly, oftentimes the patient is either a very unusual patient, perhaps one with severe mitral regurgitation or a CABG patient, or the study has not been acquired or processed properly.
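As a small illustration of this sanity check, a rough rule of thumb, using the 9,300 divisor from the early publication mentioned in this talk and treating a roughly 20% tolerance as a flag rather than a diagnostic threshold, might look like this; the function names and threshold are assumptions for illustration:

def expected_rest_mbf(systolic_bp, heart_rate, divisor=9300.0):
    # Predicted resting myocardial blood flow (ml/g/min) from the rate-pressure product
    return systolic_bp * heart_rate / divisor

def flag_rest_flow(measured_mbf, systolic_bp, heart_rate, tolerance=0.20):
    expected = expected_rest_mbf(systolic_bp, heart_rate)
    return abs(measured_mbf - expected) / expected > tolerance   # True suggests a closer look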
When you see these situations, where you have a resting flow that diverges from this linear relationship, one of the first things you should suspect is an acquisition or processing error, and that should be closely investigated. It is one of the nice tools you have in your tool chest for making these assessments. So how do we do this? The resting myocardial blood flow follows this linear relationship: systolic blood pressure times heart rate divided by, if we use that early publication, 9,300, and the measured resting blood flow should typically be within about 20% of that estimate. Doing this in your head is easy: if we imagine a systolic blood pressure of 120 and a heart rate of 80, that gives a double product of 9,600; dividing by 10,000 for simplicity, we should expect a resting blood flow of about 0.96. Here's an example of how this is applied: this particular patient has a double product of 12,947, and if we divide that by 9,300, we would predict a resting blood flow of 1.38. The actual measured resting blood flow is 1.27, which tells us we have a reasonable resting blood flow given this double product.

In summary, iterative reconstruction is almost exclusively used in PET to create tomographic images. PET perfusion studies are susceptible to motion and can have artifacts, in particular intra-scan motion, which occurs within the emission scan, and misregistration between the transmission and emission studies. There can also be shifting between dynamic frames that needs to be corrected when calculating myocardial blood flow, and we need to be aware of breathing artifacts that can impact our transmission map quality. Quality control errors can also exist in flow studies and are difficult, if not impossible, to detect visually; we have to follow the quality control steps in order to ensure the accuracy of the results, and those steps can differ between software packages, so it is essential that you get training on your particular scanner and software approach to myocardial blood flow. I want to leave you with some of the important references that were used during this lecture today; please take a moment to review them. And with that, I want to thank you for your attention to this video.
Video Summary
The video is titled "Instrumentation Module 3B: PET Quality Control, Corrections, and Processing" by James Case. The video discusses key concepts of PET imaging and provides an overview of the principles and techniques used in cardiac PET imaging. The presenter explains the different types of PET instrumentation and their capabilities in acquiring high-quality cardiac PET studies. The difference between cardiac PET and SPECT imaging is also discussed, highlighting the advantages of PET in terms of attenuation correction and image quality. The presenter touches on various types of PET artifacts such as misregistration, motion artifacts, and metal artifacts, and explains the importance of identifying and correcting these artifacts. The video also addresses the principles of PET myocardial blood flow and the calculation of important quantitative values in cardiac blood flow studies. The presenter emphasizes the importance of optimizing image reconstruction and considers the various advanced processing techniques that can be applied. The video concludes with a discussion on the quality control measures that should be followed to ensure accurate blood flow measurements in PET imaging.
Keywords
Instrumentation Module 3B
PET Quality Control
Corrections
Cardiac PET imaging
Attenuation correction
PET artifacts
PET myocardial blood flow
Image reconstruction
Quality control measures