“You can disagree without being disagreeable”

- Ruth Bader Ginsburg

We appreciate Editor Miller’s invitation to provide some remarks on a recent commentary submitted by Gonzalez et al. We reflected for some time on whether to dedicate our time and effort to countering the inaccuracies in the commentary, but its errors, contradictions, inconsistencies, and tone unfortunately demand our attention. Accordingly, and because journal space is precious, we take a brief moment to provide our thoughts on their work and highlight some key differences, doing so in a constructive way, the way the community of scholars is meant to operate.

The authors raise several points regarding our study: (1) the design itself was flawed and did not include more than one year of data for comparison; (2) the analytical methodology was unconventional and appeared to be biased in favor of detecting a significant association; and (3) our conclusions were contradictory and inaccurate findings were disseminated. We address each in turn.

Our paper was intentionally designed to look at short-term effects, a point we did not hide, since the words ‘short-term’ appear in the title. Additionally, we reminded readers in our Discussion section of the need for a longer follow-up period. Gonzalez et al. indicate that we did not account for non-linear or seasonal effects [1] and that this hampered the analyses. To make their claim, Gonzalez et al. provide a graph of data, without statistical analyses, to show trends over time aggregated by week. Referencing week 1 as the January 1–7 period in each year, they conclude “that 2020 trends were largely similar to those observed in 2019 and 2018, with slightly fewer incidents reported in January 2020 compared to prior years…An increase in family violence incidents was observed between April 1-15 (weeks 14-16) each year.”

If we examine the week 14–16 period in their Figure 1, the green line (the top-most line, for the year 2020) shows almost 350 incidents, while the 2019 and 2018 lines show a little over 300. By their own data, then, nearly 50 more incidents of family violence occurred during weeks 14–16 of 2020 than in either of the two previous years. A 16% increase in the number of incidents in 2020, and we are talking about victims, is highly relevant and should not be cast aside. Moreover, we do not dispute anything with respect to the January period, and we remain puzzled as to why it is even relevant.
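
To make the arithmetic behind that figure explicit, using round values read off their graph (so the inputs are approximations, not exact counts):

\[
\frac{N_{2020} - N_{\text{prior}}}{N_{\text{prior}}} \times 100 \approx \frac{350 - 300}{300} \times 100 \approx 16.7\%,
\]

that is, the roughly 16% increase noted above.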

Gonzalez et al. (p. 6) go on to state “the time between April 1 and April 15 does not coincide with the implementation of the stay at home executive order (March 23).” That is incorrect. The stay-at-home order started on March 24, and our calculation indicates that the immediate two-week period following the order does coincide with its implementation, which is precisely what we observe in our study. This two-week period is an important point to which we will return shortly.

Gonzalez et al. also claim that their descriptive data “demonstrated that the effects of family violence noted to occur in a temporarily proximal manner with Dallas County’s stay-at-home order were likely seasonal trends that occurred at roughly the same time each year” (p. 6). Two things should be noted here. First, the authors did not perform any statistical analysis to support such a conclusion. They failed to adequately test for seasonal trends even though their own commentary reads “seasonal trends can easily be analyzed if robust data cleaning techniques and appropriate statistical analyses are conducted” (p. 5). Gonzalez et al. continue: “we need to ensure that we are using the most robust methods possible to address questions related to important and timely research questions” (p. 10). Instead, Gonzalez et al. were content to “illustrate the seasonal trends that exist in the data that were used by Piquero et al. (2020)” (p. 5) by presenting “descriptive data from the same police department” (p. 5), and then go on to claim “findings of clear seasonal effects” (p. 6). We are perplexed that they would insist that appropriate statistical analyses be conducted when they conducted no such analyses in their commentary. Rather, they rely on a subjective, not objective, interpretation of the appearance of the data. Second, as noted above, the two-week period the authors highlight in 2020 actually shows more incidents than in either 2019 or 2018. Thus, the authors do not provide a convincing argument that our analyses are as flawed as they believe them to be.
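
To illustrate what a formal test of seasonality could look like, rather than judging trends by eye, consider the minimal sketch below: a Poisson regression of daily counts on annual harmonic terms. This is offered purely as an illustration, not as the analysis from either paper; the data frame and its placeholder counts are hypothetical stand-ins for real incident data.

```python
# A minimal sketch, assuming a hypothetical daily incident-count series, of
# how seasonality could be tested formally rather than judged visually.
# `incidents` is a random placeholder where real counts would go.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
df = pd.DataFrame({"date": pd.date_range("2018-01-01", "2020-05-31", freq="D")})
df["incidents"] = rng.poisson(10, size=len(df))  # placeholder counts

# Annual harmonic terms capture smooth within-year (seasonal) variation.
doy = df["date"].dt.dayofyear
df["sin1"] = np.sin(2 * np.pi * doy / 365.25)
df["cos1"] = np.cos(2 * np.pi * doy / 365.25)

# Jointly significant harmonic terms would constitute statistical, not
# visual, evidence of seasonal trends in the series.
model = smf.poisson("incidents ~ sin1 + cos1", data=df).fit()
print(model.summary())
```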

Finally, Gonzalez et al. note that “any future attempts to suggest that stay-at-home orders cause an increase in family violence should use multiple years of data and model seasonal trends” (p. 6). Here we must pause to note that this comment, and even the title of their commentary, substantially misconstrues the conclusions of our study by using the terms “cause” and “causally”. At numerous points, we cautiously noted that the observed spike in domestic violence incidents may not be due to stay-at-home orders but could be “associated with … people working from home, being furloughed” (Piquero et al., 2020, p. 617). Furthermore, their suggestion that multiple years of data be used is bewildering, since it contradicts existing COVID-19/domestic violence studies that the authors themselves cite. For example, Leslie and Wilson (2020a, p. 3) used data from 14 American cities and found “a 9.7% increase in domestic violence calls for service during March and April, starting before state-level stay-at-home mandates began” (see also Leslie and Wilson, 2020b). Moreover, their broad statement even contradicts the World Health Organization (2020), which noted increases in domestic violence worldwide during the COVID pandemic, as well as available domestic violence studies that they do not cite, such as Qin, Yam, Xu and Zhang (2020), who also used a January 1, 2020 start date to show increases in domestic violence, and Mohler et al. (2020), who found that domestic violence calls for service also increased in Los Angeles and Indianapolis over a date range similar to ours (January 2, 2020 to mid-April 2020: April 18 in Los Angeles and April 21 in Indianapolis). Presumably the commentators would raise the same concerns about the Mohler study, but surprisingly they opted not to cite it despite its being available in the literature for many months.

Most importantly, Gonzalez et al.’s critique that we did not use multiple years of data and failed to model seasonal trends [2] even contradicts a recently published paper by two of the commentary’s authors. Jetelina, Knell and Molsberry (2020) looked at changes in intimate partner violence during the early stages of COVID-19 using a self-report survey of social media users with a response rate of 5.7%. Leaving aside the potential sampling bias and more to the point, the Jetelina et al. (2020) study used a 14-day period in April 2020 and did not take seasonal trends from prior years into account. That two authors of the commentary would question our methodological approach for not examining seasonal trends, yet perform a similar analysis that itself does not account for seasonality, strikes us as disingenuous. Interestingly, one of the conclusions of that study (there were several, just as in ours) was that the “odds of worsening victimization during the pandemic was significantly higher among physical and sexual violence” (p. 1).

The next critique concerned the statistical power to detect a 12.5% increase. We take no issue with Hawley et al. and their conclusion that, in an ideal world, interrupted time series would have more cases and more data points, a scenario not very common in time series data. But we contend that we have enough time points to preliminarily investigate short-term changes; in fact, we use days, while Gonzalez et al. use a different unit of analysis (weeks) in their own work. After all, if a researcher wants to investigate a short-term change, what else are they to do?
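
For readers less familiar with the framework, a standard single-interruption segmented regression on daily counts takes a form like the following (a generic textbook specification offered for illustration, not a reproduction of our exact model):

\[
Y_t = \beta_0 + \beta_1 t + \beta_2 D_t + \beta_3 (t - t_0) D_t + \varepsilon_t,
\]

where \(Y_t\) is the incident count on day \(t\), \(D_t\) equals 1 on and after the order date \(t_0\) and 0 before it, \(\beta_2\) captures an immediate level change, and \(\beta_3\) a change in slope. Each day contributes one observation, which is why a daily series supplies many more time points than the same window aggregated into weeks.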

A few other points about their concern over statistical power are worth noting. First, our article never referred to a 12.5% increase. That was a separate analysis performed at the request of a media outlet to help put into context the percentage change from a period before the stay-at-home order to the period after it. The 12.5% figure is accurate and represents the change in the number of incidents from the 3 weeks before the stay-at-home order to the 3 weeks thereafter, and we expressed it as such when we provided that information. As many people know, researchers oftentimes perform additional data analyses for the media, community groups, and even policymakers for different purposes, none of which are nefarious; they help translate findings in a consumable way to an audience that is not comprised of scholars and statisticians. Different people need different data in different ways for different purposes. Engaging with the media or policymakers sometimes means providing additional or supplemental information that may differ from what was in an article while remaining consistent with it. After all, talking about regression coefficients is not commonplace in the media and is an ineffective way of translating results to a general audience.
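
For transparency, the calculation behind such a figure is a simple percentage change. The counts below are invented solely to show the arithmetic; they are not the actual Dallas numbers:

\[
\%\Delta = \frac{N_{\text{after}} - N_{\text{before}}}{N_{\text{before}}} \times 100, \qquad \text{e.g., } \frac{450 - 400}{400} \times 100 = 12.5\%,
\]

where \(N_{\text{before}}\) and \(N_{\text{after}}\) are the incident totals for the three weeks before and after the order.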

Second, as readers know, statistical power is a complicated issue generally, and even more so in interrupted time series analysis, as it depends on a variety of factors. Although a body of work exists on power analysis for interrupted time series designs, it yields no definitive answers (McCleary, McDowall and Bartos, 2017). The findings of Hawley et al. apply to the specific situation that they considered, which differs from the one in our paper. Those authors were interested in describing “the power associated with the mean sample size per time point to detect a change in 1) level and 2) trend in an outcome (cumulative incidence) following a defined intervention in the ITS framework, using ordinary least squares (OLS) regression” (p. 198, emphasis added). We were not considering cumulative incidence. Moreover, our reading of Hawley et al. is quite different. If, as Gonzalez et al. say, “thousands of time points would be necessary to detect an effect size of 15%” (p. 7), interrupted time series analysis would almost never be worth undertaking. Hawley and colleagues were referring to the cross-sectional dimension (emphasis added), that is, the number of cases at each time point, and their conclusions applied only to the type of situation that they considered [3].
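
Because closed-form power results for interrupted time series depend so heavily on the assumed data-generating process, simulation is often the more informative route. The sketch below is one hedged illustration of how such a simulation could be set up; the series length, break day, baseline rate, and effect size are all values we chose for illustration, not parameters from our study or from Hawley et al.

```python
# A hedged simulation sketch of statistical power for a short daily
# interrupted time series. Every parameter below is an assumption chosen
# for illustration, not a value from either paper.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def its_power(n_days=120, break_day=80, effect=0.15, base_rate=40,
              n_sims=2000, alpha=0.05):
    """Estimated probability of detecting a post-break level shift of
    proportional size `effect` via OLS on simulated daily counts."""
    t = np.arange(n_days)
    post = (t >= break_day).astype(float)
    X = sm.add_constant(np.column_stack([t, post]))  # intercept, trend, level shift
    hits = 0
    for _ in range(n_sims):
        y = rng.poisson(base_rate * (1 + effect * post))  # simulated daily counts
        res = sm.OLS(y, X).fit()
        if res.pvalues[2] < alpha:  # p-value on the level-shift term
            hits += 1
    return hits / n_sims

print(its_power())  # power rises with base_rate, effect size, and series length
```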

Further, Gonzalez et al. make several careless mistakes and incorrect statements in their own response. First, Gonzalez et al. state “upon closer inspection of the manuscript, none of the associations between the stay at home order and family violence were statistically significant in the first place” (p. 7). That is verifiably untrue. As shown in Table 2 (see the portion of the table reporting the two-break-point model) and Fig. 3 of our article, “there is statistically significant evidence that the trend in domestic violence changed twice: it increased after March 24th and decreased after April 7th” (Piquero et al., 2020, p. 612). Thus, unlike Gonzalez et al., who presented only a series of line graphs without any statistical evidence, we report both the conditions under which we find effects and the conditions under which we do not.

An additional error in their commentary is that they chose to report only two of the coefficients from the two-break trend analysis, the ones that were not significant in the table, while omitting the one that was significant and showed an increase, namely the trend between March 24 and April 7. This shows that Gonzalez et al. misinterpreted the trend analysis: the insignificant coefficients simply indicate that there was no jump in the level of incidents on the day the order was implemented, while the trend coefficient indicates that the trend itself changed, becoming positive and increasing. Incidentally, this pattern also corresponds to the authors’ own 2020 graph, which shows the same short-term increase evidenced in our study. Furthermore, Gonzalez et al. go on to say, regarding the increase in domestic violence that we report, that “family violence offenses would need to increase during COVID-19, which, based on the author’s data, did not occur” (p. 9). Again, that is not the case in one of our analyses (see Table 2 and Fig. 3 of our article).
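
To make the level-versus-trend distinction concrete, consider a generic two-break segmented specification (again an illustrative form, not a reproduction of our exact model):

\[
Y_t = \beta_0 + \beta_1 t + \beta_2 D^{(1)}_t + \beta_3 (t - t_1) D^{(1)}_t + \beta_4 D^{(2)}_t + \beta_5 (t - t_2) D^{(2)}_t + \varepsilon_t,
\]

where \(D^{(1)}_t\) and \(D^{(2)}_t\) switch on at the first break (\(t_1\), here March 24) and the second (\(t_2\), here April 7). The level coefficients \(\beta_2\) and \(\beta_4\) test for an instantaneous jump on the break day itself, while the slope coefficients \(\beta_3\) and \(\beta_5\) test whether the trend changed thereafter. An insignificant level coefficient alongside a significant, positive slope coefficient is exactly the pattern described above: no same-day jump, but an increasing trend after the order.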

It also seems that the authors use an entirely different data set or source than we did, at least as far as we can tell, since they are not explicit about how they collected or coded family violence, whereas we were explicit (Piquero et al., 2020, p. 608). At one point, Gonzalez et al. say that they used “three full years of family violence incident data from DPD [Dallas Police Department]” (p. 6). However, their figure reports family violence in “Dallas County,” not the city of Dallas (the data we analyzed), and their presentation includes only two full years plus a partial third year of data. Thus, we are unsure what jurisdiction their data represent, and what the comparison and critique even mean if they are analyzing an altogether different geographic area. Despite these apparent differences, the “analysis” in Gonzalez et al. also shows that family violence increased in weeks 14 and 15 (more so in 2020 than in either 2018 or 2019).

A few other points are in order. First, there is a clear tone in the Gonzalez et al. commentary that COVID-19 and its adverse effects did not lead to any changes in domestic violence, or at the very least that the effects on family violence remain “unclear” (p. 2). We could not disagree more. Not only has a recently published study authored by two of the commentators (Jetelina et al., 2020) reported some worsening of intimate partner violence, concluding that “sexual and physical violence was exacerbated during the early stages of the pandemic” (p. 1) and that “the prevalence of IPV overall was slightly higher in the study compared with the general population (18% compared with 12%)” (p. 2), but a number of studies from across the United States and abroad continue to show that domestic and family violence increased during some portion of stay-at-home or social-distancing measures, however small and/or short-lived the increase, and whether measured with calls for service or official incident data [4]. As noted above, even the graph presented by Gonzalez et al. shows some evidence of a spike (the week 14–16 period). Evidence from domestic violence shelters and practitioners also reports an increase in calls (see, e.g., Pfitzner, Fitz-Gibbon, Meyer and True, 2020).

In sum, public policy regarding domestic violence within the context of COVID-19 must seriously consider marshaling resources for domestic violence victims independent of any stay-at-home orders, which we neither said nor suggested should be altered in any way. Yet Gonzalez et al. somehow believe that our short-term study may have influenced policy toward retracting stay-at-home orders. Not at all. Those orders have clearly worked in stalling the spread of COVID-19, as have masks, hand washing, and social distancing. Our interest was merely in investigating whether domestic violence increased, a hypothesis that is both theoretically reasonable and of relevance to thousands of domestic violence victims around the world.

Second, when the commentary was in pre-print form, one of its authors was at an institution that employed several of the authors of the Piquero et al. study, and several other individuals on the commentary have written with members of our research team, including in the past few months. Yet no one from the commentary chose to talk with us individually before publicly commenting on our work. In fact, one of the commentary authors reached out to congratulate one of us on our article and asked for insight into our methodological approach, with which they were unfamiliar, expressing interest in using it for a replication in another city. In the spirit of collegiality, we offered to provide them with much of our code so they could apply it to their analysis of the “other city.” The individual never followed up on that offer.

We believe that the community of scholars should be a guild that leads by example, producing solid research and striving to present social science data in a way that makes it digestible to those who need it (e.g., politicians, journalists, and the general public). Particularly in the context of a global pandemic, when lives are at stake, we need to come together to create synergies of effort, collaboration, and mentorship as a discipline to help address urgent social issues and needs. It is imperative that we all recognize the duty we have toward each other, toward the academy, and in particular toward emerging scholars who will be on the front lines of research for many years to come.

We are all working with limited data and techniques, and many of us are following one another’s methodology, time frames, and the like. We urge the scientific community to engage in positive, not negative, discussions about how we study important and serious social problems like domestic violence. We should do so in a cooperative, not combative, spirit.