Written by Sara Richter and Angie Ficek
What it is
Our stats team recently partnered with five undergraduate students from St. Olaf College to determine whether there is a better way to estimate the percent of participants in a tobacco cessation program who quit using tobacco, which we call a "quit rate", along with its variance. The project took place during St. Olaf's J-term with juniors and seniors accepted into the school's math practicum course. The practicum divides students into small groups, and each group works with a local organization to answer a specific statistics question posed by that organization.
In our evaluations of tobacco cessation programs, we estimate a quit rate as the responder rate, or the percent of people who responded to the follow-up survey and who quit, and we apply a standard two-sided 95% confidence interval. This method has a known weakness: non-responders are more likely to still be using tobacco, so the responder rate can overestimate the true quit rate. In the past, this has not been overly worrisome because the percent of people consenting to participate (the consent rate) was very high across all programs (>90%) and the percent of consenting participants who responded to the follow-up survey (the response rate) was also high (>50%). Recently, however, many of the tobacco cessation programs that we evaluate have experienced declining consent rates and declining response rates to the follow-up surveys. This can lead to inflated quit rates and confidence intervals that are misleading. We were preparing to investigate these trends and their impact on our quit rate calculations when St. Olaf approached us about the collaboration.
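To make the responder-rate calculation concrete, here is a minimal sketch in Python. The counts are hypothetical, and the interval shown is a standard Wald (normal-approximation) interval; the document does not specify which interval construction our team uses, so this is an illustration of the general approach, not our exact method. The second calculation treats every non-responder as still using tobacco, which gives a conservative lower-bound ("intent-to-treat" style) rate for comparison:

```python
import math

def quit_rate_ci(quits, denominator, z=1.96):
    """Point estimate and two-sided 95% Wald confidence interval
    for a quit rate (proportion), clipped to [0, 1]."""
    p = quits / denominator
    se = math.sqrt(p * (1 - p) / denominator)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical program: 400 consenting participants, 200 respond
# to follow-up, 60 of the responders report quitting.
p, lo, hi = quit_rate_ci(60, 200)
print(f"Responder rate: {p:.1%} (95% CI {lo:.1%} to {hi:.1%})")

# Counting all 400 consenting participants in the denominator
# assumes every non-responder is still using tobacco -- a
# conservative lower bound on the true quit rate.
p_itt, lo_itt, hi_itt = quit_rate_ci(60, 400)
print(f"All-consented rate: {p_itt:.1%} (95% CI {lo_itt:.1%} to {hi_itt:.1%})")
```

The gap between the two estimates (30% vs. 15% here) illustrates why the choice of denominator matters more as response rates fall: the true quit rate lies somewhere between the two, and the responder rate drifts further from it as non-response grows.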
What we did
The students worked countless hours throughout January to get to the bottom of this dilemma. Using the statistical programming language R, they built a simulation that allowed them to evaluate different methods of calculating the quit rate and its corresponding confidence interval under a variety of consent and response rates. We checked in with them periodically throughout January and provided feedback on their methods, but they ultimately did all of the legwork. In the end, they provided us with a report, complete with their R code, and a presentation that answered our question.
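The basic idea behind such a simulation can be sketched as a short Monte Carlo exercise. This is an illustration of the general approach in Python, not the students' actual R code, and all of the rates below (true quit rate, response probabilities for quitters vs. non-quitters) are assumed values chosen only to show the mechanism: when quitters respond to follow-up more often than non-quitters, the responder rate systematically overestimates the true quit rate.

```python
import random
import statistics

def responder_rate_bias(n=1000, true_quit=0.25,
                        resp_if_quit=0.6, resp_if_not=0.4,
                        reps=2000, seed=1):
    """Average bias of the responder-rate estimator when response to
    follow-up depends on quit status (all parameters hypothetical)."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(reps):
        responders = quits_among_responders = 0
        for _ in range(n):
            quit = rng.random() < true_quit
            respond = rng.random() < (resp_if_quit if quit else resp_if_not)
            if respond:
                responders += 1
                if quit:
                    quits_among_responders += 1
        estimates.append(quits_among_responders / responders)
    # Positive values mean the responder rate overstates the true rate.
    return statistics.mean(estimates) - true_quit

print(f"Average overestimate of the quit rate: {responder_rate_bias():+.3f}")
```

With these assumed parameters the responder rate converges to (0.25 × 0.6) / (0.25 × 0.6 + 0.75 × 0.4) ≈ 33%, about 8 percentage points above the true 25% rate. Repeating this across a grid of consent and response rates, and across candidate estimators, is the kind of comparison the students' simulation performed.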
Where we go from here
The students identified some interesting trends and came to some eye-opening conclusions that we plan to discuss further internally and publish. We may rethink our methods for calculating quit rates, which could impact how quit rates are calculated for tobacco control programs throughout the country.
We felt the collaboration with the St. Olaf students was a mutually beneficial endeavor. It provided them with a real-world project with messy data, and they contributed analyses and results that could potentially inform recommendations with a nationwide impact on how quit rates are calculated. It allowed us to delegate a non-billable task that might otherwise have taken us six months to complete. It required some set-up time on our end to organize the data, think through the problem, and decide how to present it to them, but the amount and quality of work the students produced in the time given was well worth it.