Friday, January 13, 2017

National Academies light up cannabis, again...

https://www.nap.edu/read/24625/chapter/1

An admirable endeavor of considerable value, but with lapses, on occasion, into pedestrian or perhaps shallow thinking. The standard "more research is needed" recommendation appears, as might be expected in a National Academies report, but alongside many unrealistic and uncritical assertions about what is needed. The profile of general recommendations might be a consequence of the "general epidemiology" composition of the team and its limited field experience in cannabis research, despite the stellar credentials of all panelists and the deep experience of some of the cannabis researchers (e.g., N. Kaminski).

Something along the lines of this statement recurs:

"No or little systematic review of the published evidence was found."

A requirement for a systematic review is a sufficient number of controlled studies to create a chain of inference; what exists at present is a set of links of highly uneven quality, generally too insubstantial to support systematic review.

Why?

During the early 1980s, Mary Monk and I learned of lukewarm NIDA study section enthusiasm for our proposed NIDA epidemiological research on cannabis problems, which focused on creating primary care networks for systematic ascertainment of clinically significant cases of what now is called "cannabis use disorder." (See how we adapted this community practice study design for the first "non-specialty clinic" case-control study of the dementia syndromes in our Sydney, Australia research; Henderson et al., 1992.) That is, there was little wrong with the study design or logic. Rather, the study was belittled for its focus on cannabis problems. Later, a member of that study section took me aside and counseled me to start studying "serious drug problems," by which he meant what he was studying -- namely, heroin. Lesson: what looks like the jugular to you can look like a capillary to others.

But there is a more fundamental problem: the community-based practice research concept we developed allows replication because the design is relatively inexpensive and does not require the more expensive apparatus of drawing a probability sample from a defined study population and then making repeated longitudinal measurements -- which seems to have been the gold standard for evidence applied here.

All well and good, but the nation's current investments in the longitudinal PATH and ABCD studies are not going to yield replications. Designed as massive longitudinal studies, they do not set up the multiple replications required to produce multiple links in any chain of inference. Instead, each will produce a single study estimate, which again means not enough evidence for systematic review. Ten years from now, when a new NAS panel is convened, that part of the critique will be the same.

If I have time, I might write more on this topic of NIDA's relative neglect of relatively inexpensive multi-site studies, each of which "has its own bottom" and can produce a useful estimate in a series of systematic replications, versus its adoption of the massive-study approach under the cooperative agreement model, which yields one estimate.

Perhaps someone can comment on a topic I did not have a chance to check. Did the panel interview or hear presentations from ABCD leaders on what ABCD is doing about measurement of dose, route of administration, between-dosing intervals, etc. -- the kind of measurement that must be faced when studying cannabis in a large-sample, longitudinal, real-world context? The panel's recommendations about the field's neglect of these variables betray (1) a lack of familiarity with the specific context of cannabis research, in which DEA has driven the behavior underground and done pretty much the opposite of what the National Commission on Marihuana and Drug Abuse recommended in the early 1970s (side-note: check Senator Sessions' testimony and actions on the federal law enforcement front; back to those "culture wars" later), and (2) the practical problems faced in a regulatory environment that thwarts standardization of bioavailability and bioequivalence measurements.

Perhaps the best that now might be done in cannabis research is reflected in what the PATH study is able to do for tobacco and nicotine delivery products, which have been more tightly regulated and have had the advantage of more than a half-century of well-funded NCI and other NIH research. Someone might have contrasted the total research dollars allocated across NIH institutes for tobacco and nicotine with the comparable dollar amount for cannabis, and reflected on what realistically should be expected. For example, in a thought experiment, go back to the year when the NIH's dollar-adjusted expenditures on "tobacco/nicotine health hazards" epidemiology reached the amount the NIH has spent to date (NIH-wide) specifically on "cannabis health hazards" epidemiology research. Then check the quality and nature of the epidemiological evidence on tobacco/nicotine health hazards in that year, relative to the quality and nature of epidemiological research on cannabis health hazards at present. E.g., were there systematic reviews of tobacco health effects in that year? I suspect not.

But at the end of the day, the NAS panel has done a service by pointing out that more cannabis research is needed, and they were not silent on the dampening effect of federal policy when it comes to learning about the health hazards of cannabis (which I study) or the potential health benefits (which I encourage others to study).

They had to say something, and in general, what they said was not bad; it was just unrealistic.

[Postscript: The cannabis research dollar total should not include the Monitoring the Future study. It represents NIDA's most substantial investment in the study of adolescent and young adult drug use, but it is psychosocial research in its orientation and never has produced authoritative evidence on cannabis hazards. It has produced, and still can produce, useful estimates on other things (e.g., how many kids are using each year, provided they stay in school; cannabis risk perceptions; school dropout), but it is a pretty meager source of evidence on health hazards, either cross-sectionally or longitudinally, with its longitudinal evidence severely constrained by massive sample attrition and mailed questionnaire measurements. We cannot expect too much in the way of definitive epidemiological evidence on cannabis or other drug health hazards when there is so much missing data in the longitudinal trace, and no serious attempt to assess the validity of the mailed questionnaire responses about health status.]

Apologies for typos and possibly an errant URL. Moving fast today.
