By Ted Bruce
The debate over mandatory bike helmet use has predictably heated up with the arrival of ideal biking weather, and there is an abundance of population-level data being tossed around to buttress both sides of the argument. On one side is the position that mandatory helmet laws discourage ridership, and that it is high ridership that creates safer riding environments; thus, mandatory helmet laws might actually impede reductions in injury rates. This argument also suggests that by discouraging cycling, society forgoes the benefits of more people participating in active transportation. On the other side, there are data showing the lower injury rates associated with helmet use and little evidence that mandatory laws have dampened the uptake of bicycling as a form of transport.
An impressive array of data has been marshalled to bolster both sides of the debate. The importance of epidemiology to good policy making cannot be overestimated. But the debate highlights the challenge of securing the quality of evidence we need to advocate for prevention strategies.
Of particular concern is the considerable lag time between a prevention intervention and the measurable benefits it is intended to produce. This is compounded by the reality that any given intervention is only one of many that co-exist in a messy policy world. How does one assess the benefits of a specific intervention in a complex, real-world, uncontrolled environment where many confounding variables affect the measured outcome? In addition, prevention interventions face an underlying problem in generating data: if they are effective, they have prevented an occurrence, and our routine data collection systems may have limited mechanisms for capturing events that never happened.
There are a number of research methods, such as cross-jurisdictional studies, that can assess the benefits of prevention interventions, as we see in the case of the debate on bike helmet laws. Rates of injury and disease, for example, can be compared between jurisdictions that implement a prevention strategy and those that don’t. Even with these more innovative methods, many such studies remain open to challenge because of differences in the real-world contexts being compared. Fortunately, there is an ever-increasing body of quality evidence available to argue the case for prevention. Check out this source of best practice reviews: Canadian Best Practices Portal (Public Health Agency of Canada).
While it is never wise to base decisions on individual cases that are not representative of the larger population, case studies and narrative analysis can be quite informative and convincing. As we all know, it is the “stories” of real people that often influence decisions, whether or not good evidence is available.