November 3, 2008
David Wohl, M.D.
There were a number of different studies presented at this combined conference, and I think that it's always a challenge to try to sift through the data and bring back those lessons that we should learn and apply to our clinical practice. For most of us, progress is made one step at a time, and the paving stones of our advancing understanding of HIV therapeutics really come in the form of clinical trials.
There were a variety of clinical trials presented, including a few head-to-head trials. It's interesting that this is playing out at the same time that the World Series is playing out, not to mention campaigns for public office, and the idea is the same: We're attempting to compare -- ideally on a level playing field -- two or more contenders for our loyalty. These head-to-head trials are trying to tell us what they think may be the best way for us to treat HIV.
In HIV medicine, and in medicine in general, cutting-edge physicians have no choice but to pray at the altar of the randomized, controlled trial. We depend upon randomization to level that playing field so that we can look at these different therapies closely and understand their advantages and disadvantages. It is these kinds of studies that caught my attention at the 2008 ICAAC/IDSA meeting and the ones I will be discussing.
The trials generally fell into one of three categories, and they are sort of hackneyed, but they are: something old, something new and something basically done over.
I'll talk first about the "something new" category of randomized clinical trials. Really, one trial made the most impression, and that was the STARTMRK study.1 This is a study of raltegravir [MK-0518, Isentress]. Raltegravir is a new drug in the integrase inhibitor drug class.
Over the past year, we have become increasingly comfortable mixing this drug together with others in order to create regimens for people who are heavily treatment experienced. Some of us clearly have been seduced into using this drug as part of a second-line regimen. It's ritonavir [RTV, Norvir]-free and has a minimal pill burden, so it's easy to see why it could be a handy drug in the circumstance.
But there really have been few data to bolster those of us who are itching to use the drug as a first-line therapy. Marty Markowitz has presented data from a relatively small, dose-ranging study in which raltegravir was compared to efavirenz [EFV, Sustiva, Stocrin].2 These results were reassuring in that there were no glaring differences between these drugs seen out to even 96 weeks.
At this conference, we saw Jeff Lennox from Atlanta present the first large, well-powered study of raltegravir for the initial therapy of HIV. The STARTMRK trial was double blind and it compared twice-daily raltegravir to efavirenz in treatment-naive patients who were also receiving the fixed-dose combination of tenofovir/emtricitabine [TDF/FTC, Truvada].1
Like most of its ilk, this trial was a non-inferiority study. The outer bound of the confidence interval for non-inferiority was set at 12%. Thus, if the 95% confidence interval for the difference between the arms exceeded that bound, you would not be able to claim non-inferiority to the comparator. The primary endpoint, of course, was a viral load of less than 50 copies/mL at week 48, and non-completers were considered failures.
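To make the arithmetic of a non-inferiority margin concrete, here is a rough back-of-envelope sketch using a simple Wald 95% confidence interval for the difference in response rates. The rates and arm sizes are the approximate STARTMRK week-48 figures quoted in this discussion (86% vs. 82%, roughly 280 patients per arm); the trial's actual statistical analysis was more sophisticated than this illustration.

```python
import math

def noninferiority_check(p_new, p_ref, n_new, n_ref, margin):
    """Wald 95% CI for the difference in response rates (new drug minus
    comparator). Non-inferiority holds if the lower CI bound stays
    above -margin."""
    diff = p_new - p_ref
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    lower = diff - 1.96 * se
    return diff, lower, lower > -margin

# Approximate STARTMRK week-48 figures: 86% vs. 82%, ~280 per arm, 12% margin
diff, lower, non_inferior = noninferiority_check(0.86, 0.82, 280, 280, 0.12)
# The lower CI bound sits around -2%, comfortably above -12%,
# so non-inferiority is met in this rough sketch.
```

The point of the margin is that even the worst plausible true difference (the lower bound of the interval) must not be worse than the pre-specified 12%.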
About 280 people were included in each study arm. This study population, like that of most studies that we're doing in the United States, was largely male and largely non-white. About 60% of the individuals were African American.
More patients in the raltegravir arm made it to 48 weeks than did those in the efavirenz arm. This is because there were more discontinuations among people who were on efavirenz, for a variety of reasons, toxicity often among them.
At week 48, 86% of the patients on raltegravir, compared to 82% of the patients on efavirenz, had a viral load of less than 50 copies/mL. So, pretty similar results. What is notable is that not only did raltegravir go toe to toe with efavirenz, but it maybe had a little bit more on top than efavirenz, though not statistically different. This clearly met the condition for non-inferiority. Thus, it could be said that raltegravir was not inferior to efavirenz.
CD4+ cell count gains were seen in both arms, with a greater increase seen with raltegravir, interestingly enough.
When you look at the incidence of pure virologic failure that emerged in this study -- not just how many people made it to a viral load of less than 50 copies/mL because their drug was working and they were still on their drug and tolerating it -- there were 39 with efavirenz, versus 27 with raltegravir. Only a fraction of the patients, unfortunately, had sufficient virus to be tested for resistance. Those who were tested for resistance showed what we'd expect: There was NNRTI [non-nucleoside reverse transcriptase inhibitor] resistance in the efavirenz arm and some integrase mutations in those who had received raltegravir.
Virologic failure, just to be clear, in this study was defined as having either a viral load of greater than 50 copies/mL at the time of stopping treatment or an HIV RNA greater than 50 copies/mL at week 24. It also included virologic rebound -- that's if your virus rebounded to more than 50 copies/mL on two occasions at least a week apart after you initially had a virologic response.
When you look at adverse events, overall there were more adverse events with efavirenz than raltegravir, with CNS [central nervous system] toxicity being a big issue for efavirenz, which is par for the course with that drug.
Raltegravir was more lipid friendly, except in terms of HDL [high-density lipoprotein] levels, where efavirenz produced better responses. So that's, again, something we've seen, that with NNRTIs, HDL levels go up. Maybe with boosted PIs [protease inhibitors] we see the same thing. Raltegravir doesn't seem to lead to as much of an increase in HDL levels, but it was much more lipid neutral when it came to the other lipid subfractions.
For me, this study provides much needed ammunition to use this drug up front, for those who are inclined to do so. I also think that there are more and more reasons to think about using raltegravir earlier in therapy. Most boosted PIs don't perform as well as efavirenz, when looking at virologic efficacy in these types of clinical trials. And again, raltegravir at least does as well as efavirenz, and this really puts it on the treatment map.
Raltegravir is still a twice-a-day drug, and it has not yet been co-formulated with anything else. So it still, I think, has an uphill climb in terms of competing with the fixed-dose combination of efavirenz/tenofovir/emtricitabine [EFV/TDF/FTC, Atripla]. But for some people, I think raltegravir is an option, and these data provide backbone for such use.
Now, on to the "something old" category. There are a number of different studies that we can talk about. Remember, in HIV, something is considered old if it happened more than six months ago. So there were updated 96-week data from three head-to-head trials that are worth mentioning. These included data from: ARTEMIS, a trial that compared once-a-day ritonavir-boosted darunavir [TMC114, Prezista] with lopinavir/ritonavir [LPV/r, Kaletra];3 the CASTLE study, which compared ritonavir-boosted atazanavir [ATV, Reyataz] versus, again, lopinavir/ritonavir;4 and the HEAT trial of tenofovir/emtricitabine versus abacavir/lamivudine [ABC/3TC, Epzicom, Kivexa].5
The extended data from ARTEMIS are of interest.3 This is a study of almost 700 treatment-naive patients. It compared 800 mg of darunavir boosted with 100 mg of ritonavir against standard-issue lopinavir/ritonavir. In this study, remember, lopinavir/ritonavir could be given either once a day or twice a day. As I'll mention in a second, most people started out with the older formulation of lopinavir/ritonavir, and then ultimately switched.
The 48-week data have recently been published in the journal AIDS.6 There were excellent responses with both drugs when co-administered with tenofovir/emtricitabine. Ritonavir-boosted darunavir led to a greater proportion of patients getting to a viral load of less than 50 copies/mL at week 48, but statistically, darunavir was considered to be non-inferior but not superior to lopinavir/ritonavir. Thus, they had very similar responses.
What happened at 96 weeks? What we see is that there was a fairly persistent and consistent rate of virologic suppression with ritonavir-boosted darunavir.3 So there really wasn't much change from week 48 to week 96. On the other hand, when you look at lopinavir/ritonavir, there was a dip in the proportion of people with a viral load of less than 50 copies/mL after week 48, to the degree that ritonavir-boosted darunavir achieved superiority over lopinavir/ritonavir at 96 weeks.
The difference between the two, again, seems to be driven by a drop-off in the lopinavir/ritonavir arm, and this is driven largely by a greater intolerance to lopinavir/ritonavir. More patients discontinued this drug for a variety of reasons, including those not related to virologic failure.
In fact, most of the people who discontinued the drug did so not for reasons of virologic failure. Although, it should be mentioned that there were more pure virologic failures seen with lopinavir/ritonavir (17%) versus ritonavir-boosted darunavir (12%).
Even when you compare ritonavir-boosted darunavir to twice-a-day lopinavir/ritonavir -- because remember that patients could be on either the once-a-day or the twice-a-day formulation, and there could be pharmacokinetic problems with the once-a-day formulation -- there still was evidence of a significantly lower rate of virologic suppression with the lopinavir/ritonavir.
There were some modest differences in lipids, mostly in triglycerides, that favored darunavir. Again, the lopinavir/ritonavir formulation may or may not have been an issue. Most people started out with the older formulation, but by the end of the study, 86% had already switched to the new formulation of the drug. We'll talk about the implications of this study after also discussing the CASTLE results.
These are the data from the CASTLE study,4 which had already been presented at 48 weeks.7 This was a study of lopinavir/ritonavir versus ritonavir-boosted atazanavir. At 48 weeks, these drugs -- when combined with tenofovir/emtricitabine -- looked remarkably similar. This was a large study, with 800 treatment-naive patients. Not surprisingly, there was a lipid advantage going to ritonavir-boosted atazanavir.
At 96 weeks, again, as we saw in ARTEMIS,3 there's a tapering of the efficacy of lopinavir/ritonavir, whereas atazanavir maintained the efficacy we had seen at 48 weeks.
As in ARTEMIS, there's a variety of non-virologic causes of treatment discontinuation. Certainly, GI [gastrointestinal] adverse events were more common with lopinavir/ritonavir, and there were greater increases in triglyceride levels. But there were other things that were going on at the same time that were non-specific.
Both the ARTEMIS3 and CASTLE4 studies reveal a limitation, I think, of lopinavir/ritonavir, when it is compared to PIs that are boosted with much less ritonavir. I think what's driving a lot of the inability to continue on lopinavir/ritonavir is the ritonavir. In both studies, we see more patients abandon lopinavir/ritonavir, and usually not for virologic reasons. The GI toxicity provides an explanation, but it's only part of the picture. There's more going on there, and I think it's just not being able to continue to take therapy.
For some reason, in the CASTLE study,4 for example, four times more people on lopinavir/ritonavir withdrew consent than did those on atazanavir. So there is something going on there.
These results suggest that patients who can stick with lopinavir/ritonavir will probably do fine.
A caveat, of course, is that ARTEMIS3 may argue that there was a bit more virologic failure among those receiving lopinavir/ritonavir compared to those receiving ritonavir-boosted darunavir, and that this was just statistically significant. It's a P value of .044, but it's there.
So that could add a little bit of a chink in the armor of saying that these two antiretrovirals are going to be virologically the same if they are tolerated.
But generally, it does seem that most patients will do virologically well if they can tolerate the drug. But there are problems in terms of the longer-term ability to stay on lopinavir/ritonavir compared to some of these other drugs. Clearly, most people do stay on it. So I think you have to balance this. But, prior to exposure, we can't predict who will do well with a drug, and atazanavir is already giving lopinavir/ritonavir a run for its money, while darunavir is poised to. So I think these studies will be further ammunition that these competitors to lopinavir/ritonavir will use to indicate that they could provide the same type of virologic efficacy and better tolerability.
So what does this all mean? We have two studies here, CASTLE and ARTEMIS, that seem to indicate to us that a drug we've had around for a long time, lopinavir/ritonavir, may not work as well as we'd like over the long haul.
There are some important things to remember. One is that both CASTLE and ARTEMIS are pharmaceutical company-sponsored studies. Hopefully, that doesn't make a big difference, but we know -- especially those of us who are cynical -- that it has some influence. This is not a non-denominational ACTG trial, for instance. I think these results have to be viewed through that prism.
Another thing we have to appreciate is that these were secondary endpoints. The primary endpoint was at 48 weeks. This is a secondary endpoint. Does that mean it's meaningless? No. I think it is important and I think what we're seeing consistently in both of these studies is that there is a challenge for many patients to remain on therapies that are more difficult to take. I do think that we're seeing lopinavir/ritonavir become relatively more difficult than other drugs -- whether it be an efavirenz-based regimen, atazanavir once a day or darunavir once a day -- that are taken with less ritonavir. I think that these are important data that we can incorporate into our clinical practice when we look at the patients who are coming in.
Clearly, already, we're voting with our prescription pads. Atazanavir sales are up, because we are prescribing that drug. What will happen with ritonavir-boosted darunavir? Well, we'll see now that we have a formulation that we can actually use to craft an 800/100 once-a-day regimen with that drug.
The bottom line is that we're going to have to take these results into consideration. Patients will generally do well on lopinavir/ritonavir, although it looks like over the long haul more of them will have trouble with the drug and have to switch to an alternative. That may make some of us feel like maybe we should use an alternative to begin with. Others, however, will feel more comfortable with this coformulated product, because it is a known agent with many years of experience. We might feel more comfortable taking our chances. I think that's going to be a very individual decision based upon a health care provider's individual clinical practice.
I'm going to move on now to the HEAT trial,8 for which the 96-week data were presented at this conference. This is a study that has gotten a lot of press in the past. It compared abacavir/lamivudine to tenofovir/emtricitabine in patients who were also taking lopinavir/ritonavir once a day. I really don't have too much to say about this study, just to note that there were 96-week data.5
I think the good news for the maker of abacavir [ABC, Ziagen], with regard to this study, was that there was no bad news. As was seen previously, both study arms have nearly identical efficacy rates. In the poster, what the authors have tried to do is reassure those who have what I call abacavir hyperanxiety syndrome. These are people who are so freaked out over recent study results regarding abacavir that they don't know what to do.
That's brought on by recent results from the ACTG 5202 study9 that found that there was a lower rate of virologic efficacy for abacavir/lamivudine versus tenofovir/emtricitabine in patients who had a screening viral load of greater than 100,000 copies/mL, not to mention the whole brouhaha regarding D:A:D10 and SMART11 and abacavir and myocardial infarction risk.
What we get in this particular HEAT presentation at 96 weeks is an examination of the 48 to 49 people in each study arm with virologic failure.5 These were broken down into patients who had a high versus a low viral load at baseline. There were slightly more people on tenofovir/emtricitabine who failed with a baseline viral load of less than 100,000 copies/mL -- meaning, of course, that proportionately more people on abacavir/lamivudine failed with a higher baseline viral load. These are relatively small numbers. I don't know what it means, and I'm not even sure why it was presented.
In addition, there were slightly more mutations at the 184 locus in the tenofovir/emtricitabine-containing arm, 17, versus 11 in the abacavir/lamivudine arm. But again, since this is a study of over 680 people, I don't know how much that really means to me.
Just as a footnote: There was another related study called the ARIES study.12 This is a single-arm study of 515 people starting HIV therapy who were screened for HLA-B*5701 and, once found negative, were given abacavir/lamivudine plus ritonavir-boosted atazanavir. The study called for people to be randomized after 36 weeks to a simplification to unboosted atazanavir. What's presented here is just the 36-week data.
What happened was that, before the simplification took place, 70% of the patients already had a viral load of less than 50 copies/mL by intent-to-treat analysis. And that's nice. I think those are very useful data.
The investigators conducted some subgroup analysis. Again, reassuringly and a little bit defensively, given the ACTG 5202 data,9 it showed that there was a slight decline in virologic efficacy in patients with higher baseline viral loads. That's seen with all the PIs, although, notably, it's consistently not seen with efavirenz. Even when the A5202 outcome definitions were used, there was no great change in efficacy. However, compared to A5202, ARIES is a much smaller study.
This is a helpful poster because we have precious little data on ritonavir-boosted atazanavir in treatment-naive patients, with the exception of the CASTLE data.4 For those of us who for some reason have to take a pass on the tenofovir/emtricitabine and reach for abacavir/lamivudine, it's reassuring and helpful to know that there are short-term, 36-week data on that particular combination of abacavir/lamivudine and ritonavir-boosted atazanavir.
We talked about something old and new, and now for something borrowed or, more accurately, something redone, a do-over. The MERIT study is another one of these comparative, head-to-head trials.13 It looked at maraviroc [MVC, Selzentry, Celsentri] versus efavirenz. And going up against efavirenz, one has to tremble a little bit, because nothing ever beats efavirenz.
Earlier, we talked about the raltegravir data, which are pretty impressive, considering how well efavirenz does in clinical trials. Maraviroc went up against the giant slayer and did not seem to do as well against efavirenz as many had hoped it would. This was mildly disappointing to those of us who were seeking another drug to use up front.
MERIT is a large study with 740 people enrolled.13 It's also a powerful study: At 48 weeks, 65% of the patients who were on maraviroc had a viral load of less than 50 copies/mL, compared to 69% of those on efavirenz. With a study of this size, that is a meaningful difference.
Of note: Everyone got zidovudine/lamivudine [AZT/3TC, Combivir] in this study. The difference was 4.2% overall, with a confidence interval that just exceeded the outer bound for non-inferiority: The bound was 10%, and the upper limit of the 95% confidence interval for the difference was 10.9%. This means the trial could not rule out that maraviroc was inferior to efavirenz, so non-inferiority was not demonstrated. Again, disappointing results.
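You can see how narrowly MERIT missed by plugging the reported numbers into a simple Wald confidence interval for the difference in response rates (efavirenz minus maraviroc). The roughly 370-per-arm sizes here are an approximation for illustration, and the study's actual analysis differed in its details, but the arithmetic lands close to the reported 10.9% upper bound.

```python
import math

# Approximate MERIT week-48 figures: 69% (efavirenz) vs. 65% (maraviroc),
# ~370 patients per arm, non-inferiority margin of 10%.
p_efv, p_mvc, n = 0.69, 0.65, 370
diff = p_efv - p_mvc
se = math.sqrt(p_efv * (1 - p_efv) / n + p_mvc * (1 - p_mvc) / n)
upper = diff + 1.96 * se  # roughly 0.108 in this rough sketch
# Non-inferiority requires the upper CI bound to stay under the 10% margin.
non_inferior = upper < 0.10
```

With the upper bound landing just past 10%, the worst plausible true deficit for maraviroc could not be ruled out, which is exactly the "so, so darn close" miss described above.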
Subsequently, the investigators have tried to explain this finding by looking at mitigating factors. This is sort of their version of hanging chads. Is there something that we can look at -- given that the results were so, so darn close -- that can help push it over the edge?
Primarily, what they have been doing is looking to see whether there were folks who were enrolled in the study who were not really R5 tropic.
They did screen -- using the older Trofile test -- for R5 virus. You could only get into the study if you had R5 virus at screening.
What they first tried to do was look at the people who were found not to have R5 virus during the study. Those patients eventually were determined to have dual/mixed virus, and were excluded from the analysis. That did lead to some impact on the results. It didn't completely explain all the virologic failures, however. It did show that there was a better response when those people were eliminated, but it didn't explain every single one of those virologic failures that were reported during the study.
In the present analysis,14 what the investigators have now done is apply the enhanced Trofile test to the screening specimens that were collected during the study. The enhanced Trofile is sort of like the Lexus of Trofile tests: It has a 30-fold increase in sensitivity for detecting minority variants. The researchers took the 721 people who entered the study and looked at their screening results. These are all people who, on the old test, were found to have R5 virus. Well, lo and behold, when you use the enhanced test, 15% (106 out of 721 patients) were discovered to have non-R5 virus at screening, and these were all dual/mixed.
These investigators asked: What happens if you exclude these people from the primary analysis? When you do so, the numbers change: 68% of the patients on maraviroc got a viral load of less than 50 copies/mL, compared to 68% of those on efavirenz. So it levels it completely.
In fact, there were similar data presented from an ACTG trial of vicriviroc [SCH 417690; SCH-D], another R5 inhibitor.15 That showed a very similar sort of response, that if you exclude people who really, truly aren't R5 tropic, lo and behold, you get a better response when you use an R5 antagonist.
But unfortunately, you don't get a second chance to make a first impression. And many have it in their minds that maraviroc is not a contender for first-line status. These results may sway some, but are they going to sway them up and over the barriers to the use of this drug? The barriers include both the need for a screening test, like the Trofile assay, and its cost. I'm not sure. I think it's still going to be a challenge for this drug to elbow its way in, given the data that we're seeing for competitor drugs, including newer drugs like raltegravir.
Now for a completely different topic: It's not a randomized clinical trial, but I think the NA-ACCORD [North American AIDS Cohort Collaboration on Research and Design] study16 is also worth noting, since it is one of the most important studies to have been presented at this conference.
This study is a very ambitious effort that combines data collected across a number of different cohorts to try to determine whether there is any difference between starting HIV therapy at a CD4+ cell count above 350 cells/mm3 but below 500 cells/mm3, and deferring therapy until the CD4+ cell count falls to around 350 cells/mm3, as is recommended.
There was a lot of press about this, and I think appropriately so, given we really don't have a lot of data that helps us understand when we should initiate therapy. We know that people who start HIV therapy with a CD4+ cell count of less than 200 cells/mm3 do not do as well -- as far as morbidity and mortality -- as those who start with a CD4+ cell count over 200 cells/mm3. There are other data that indicate that, if you parse it out and look at patients who start at CD4+ cell counts of 200 cells/mm3 to 350 cells/mm3, you see that they do a little bit better than people who start at lower CD4+ cell counts, but you don't see a much greater difference, or any greater benefit, for those who start at CD4+ cell counts that are more than 350 cells/mm3.
I think it's important for us to get more data. There is a randomized clinical trial looking at when to start that randomizes people to either start therapy at a CD4+ cell count of at least 500 cells/mm3 or defer therapy. This present analysis that I'm going to discuss tries to emulate that clinical trial setting by looking at those who started therapy with a CD4+ cell count at 350 cells/mm3 or less, and those who started therapy with a CD4+ cell count that was higher than 350 cells/mm3, but less than 500 cells/mm3.
Just for background: The NA-ACCORD is a collaboration of 22 different cohorts from the United States and Canada. It's prospective and ongoing, and collects data on a regular basis that were used to inform this analysis.
All the study patients were HIV-infected people who had a CD4+ cell count of 351 cells/mm3 to 500 cells/mm3 and who were in active follow-up between 1996 and 2006. All these people went on to receive HIV therapy.
All patients were treatment naive when they entered this analysis. The outcome measure was all-cause mortality. The investigators wanted to see what happens to mortality when you initiate therapy right away versus defer it until the CD4+ cell count drops. The groups being compared are those who started therapy at a CD4+ cell count between 351 and 500 cells/mm3 versus those who deferred therapy.
The researchers used a number of different sophisticated models to do this, and analytical techniques to try to reduce bias. They included thousands of patients. Over 8,000 patients were studied, with over 24,000 person-years of follow-up. Again, patients were mostly male. About 40% were white, meaning 60% were non-white. A good amount, about 20%, had injection drug use. A quarter to a third had hepatitis C coinfection. So, a pretty representative example of what we're seeing in our clinics.
When the investigators examined data comparing mortality for the two groups, they found that the hazard ratio for mortality was about 1.7 -- meaning there was a 70% increased risk of mortality by deferring HIV therapy.
These are striking data and really, I think, are the strongest data we have currently that indicate that there may very well be a difference when you look at people with a CD4+ cell count between 350 cells/mm3 and 500 cells/mm3, compared to those who start with a CD4+ cell count of less than 350 cells/mm3. This is a robust data set with a lot of detail and a large number of patients. So I think this is going to be hard to ignore, caveats and all about how the analyses were done.
They did find that there were some other risk factors that might also play into this. The strongest was age, and that makes sense: The older you were, the more likely you were to die during the follow-up period. But HIV therapy deferral was the most significant, and the most powerful, risk factor that was reported.
They looked at a number of different things to try to establish some balance and control. They showed that adherence to therapy was pretty much equal among those patients who started with a CD4+ cell count that was higher versus those who ultimately started with a CD4+ cell count that was lower. They looked at injection drug use and controlled for that. There are a number of analytical techniques that are going into this to try to say that when you look at the group that deferred therapy -- accounting for all the things that we can account for -- there still is this effect, and we think it's due to delayed therapy.
Another significant aspect of this is that in other studies where you look at people who start therapy at the get-go at a CD4+ cell count of less than 350 cells/mm3 versus those who start therapy at more than 350 cells/mm3, there's a risk for lead-time bias. That is, for those who you're looking at with a lower CD4+ cell count, you're only getting the survivors. You're not getting the people who didn't live long enough to start their HIV therapy. By including people with higher CD4+ cell counts, and following them with this data set, I think we get a much better picture of the risk of that delay, because now we can count and include those people who actually don't make it to initiate. So if you're only looking at people who initiate at a CD4+ cell count of 320 cells/mm3, well, that excludes all the people who didn't make it to 320 cells/mm3 because they had their heart attack, or they had their hepatic failure, or they had their stroke, or even their opportunistic condition, like a cancer.
I think this is much more thorough, and much more informed, than some of the data we have had from other cohort studies. Clearly, though, the bottom line is that we're going to need to see the results from a randomized trial, as well. Will these results be enough to change practice? I think we have to see more. This is one presentation I think that when it gets into print will be very helpful.
There are still some questions I have about how clear we are about confounding variables. There are maybe things that are very hard to measure that can contribute to morbidity and mortality amongst patients who defer therapy -- chaos in their life; insurance status; other things that may have a bearing on whether they start HIV therapy, but could also increase their risk for doing poorly even before they do start therapy.
So I'm interested in that, and I think that will have to be fleshed out more in this particular study. Overall, I think that these are profound and provocative results from this particular group. Undoubtedly, we'll hear more about this topic in the next coming weeks and months.
In summary, I think there were some important data that were presented at this conference. It's hard -- with so many conferences diluting out some of the data -- to have home runs at every single one of these conferences. I think the NA-ACCORD data16 is certainly of that caliber. The head-to-head trials1-5,9,12,14 help us understand even more about the potential advantages and liabilities of the therapies we are using. The raltegravir data1 are certainly welcome. There's more than enough room for another new drug to come in and be an alternative to the drugs that we're using now, and I think this opens up again the options for our patients, ourselves and our clinics.
This transcript has been lightly edited for clarity.
Please note: Knowledge about HIV changes rapidly. Note the date of this summary's publication, and before treating patients or employing any therapies described in these materials, verify all information independently. If you are a patient, please consult a doctor or other medical professional before acting on any of the information presented in this summary.