The multiple contradictions observed in the CARICOM report are striking

Dear Editor,
A CAREFUL examination of the “Report of the CARICOM observer team for the recount of the Guyana March 2, 2020 elections” has raised a number of important concerns about the structure, process, and outcome of the analysis described in the 132-page document.
First, it is unclear whether the Team’s categorical observation, that its report was based on “…an audit” and therefore “…was not in fact a recount,” is consistent with what is described elsewhere in the document. For example: “Overall, while we acknowledge that they were some defects in the recount of the March 02, 2020 votes cast for the General and Regional Elections in Guyana, the Team did not witness anything that would render the recount … from reflecting the will of the voters. The actual count of the vote was indeed transparent” (p.2). There is a need to redefine what “transparent” means within the context of a report that was limited in its observation. Moreover, the relative frequency with which the word “recount” (approximately 92 per cent) was used compared with “audit” (about eight per cent) adds to the conceptual confusion about the approach taken in this report.

Additionally, the observation that “GECOM’s elaborate and unnecessary checklist (see Appendix IV). A checklist which was unnecessarily/excessively burdensome and which was suggestive of an audit rather than a recount” highlights the conceptual confusion about the approach the Team adopted in its work. This issue, by itself, is not especially problematic; however, given that a recount is clearly a type of audit, one wonders about the potential impact that an unclear framework might have had on the process and the outcome of this report.

Second, the constraints that the Team described in the document (see page five, para. one) included the number of Team members (n=3), which resulted in only 18.09 per cent of work stations (n=423) being observed in the process. This methodological approach, or “The Recount Strategy,” is fundamentally flawed, yet it was the framework on which the conclusions reached by the CARICOM Team rest. The implications of this unscientific methodology are far-reaching. The a priori assumptions about the adequacy of the selected size of the CARICOM Team are brought into question and should be considered when assessing the value placed on this limited and therefore incomplete report. A report is only as good as the research design, or in this instance “The Recount Strategy,” that it uses. Parenthetically, when was it known “… that it was virtually impossible…” for the Team to deploy adequately across the work stations? More importantly, as is described later, were the decisions made in the face of the unfortunate reality of these limitations not consequential? The Team’s solution to this problem lies at the heart of the challenges inherent in the findings posited in this report.
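
For context, the coverage figures quoted above imply a total station count that can be back-calculated from the report’s own numbers. The sketch below shows the arithmetic; the implied total of roughly 2,338 stations is an inference from the 423/18.09 per cent pairing, not a figure stated in this letter:

```python
# Back-calculate the implied total number of work stations from the
# report's own figures: 423 stations observed = 18.09 per cent coverage.
observed = 423
coverage = 0.1809

implied_total = observed / coverage    # roughly 2,338 work stations in total
unobserved = implied_total - observed  # stations never seen by the Team

print(round(implied_total))                        # implied total stations
print(round(unobserved / implied_total * 100, 2))  # per cent never observed
```

In other words, more than four out of every five stations fell entirely outside the Team’s view, which is the gap the letter’s later points address.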

Third, the discussion about the selection of the specific work stations (n=423, or 18.09 per cent) does not point to an approach that can be trusted, at least in terms of the results of the analysis. Was the choice of work stations based on established principles of random selection? The document does not even attempt to address this well-known problem of making inferences from a sample (i.e., n=423) to the general population from which the sample was drawn. Additionally, even if one grants the Team a passing grade on the assumption “… that it was virtually impossible …” to observe 100 per cent of the available work stations, it is irresponsible to omit from the report a description of the steps taken to ensure that its work was representative of the Guyanese electorate. The reader is left asking which, if any, of the basic principles of random sampling were considered or implemented in this design. If the criteria for any type of random sampling are not met, then the platform on which the results are built is fragile and potentially fatal to any attempt at a “free and fair election” process. Thus, in the face of the violation of well-established guidelines for the analysis of data, the results cannot be trusted. The failure to describe the criteria used to select work stations within and across each region destroys any confidence one could reasonably have in the process and the outcome of the CARICOM Team’s work.
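
To illustrate the point about random selection: a defensible sampling plan would publish both the selection mechanism and its seed, so that anyone could reproduce the draw and verify that every station had an equal chance of inclusion. A minimal sketch of such a plan (the station total of 2,339 and the seed value are illustrative assumptions, not figures from the report):

```python
import random

# Hypothetical frame of work stations; the total here is only an
# illustrative assumption inferred from the 423 / 18.09 per cent figures.
TOTAL_STATIONS = 2339
SAMPLE_SIZE = 423

stations = list(range(1, TOTAL_STATIONS + 1))

rng = random.Random(2020)                   # a published seed makes the draw reproducible
sample = rng.sample(stations, SAMPLE_SIZE)  # simple random sample, without replacement

# Every station has the same inclusion probability, and a third party can
# re-run the draw to verify it -- the transparency the report does not provide.
assert len(set(sample)) == SAMPLE_SIZE
```

Publishing this kind of procedure is what allows a reader to check representativeness; its absence is precisely what the report omits.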

Fourth, the percentages of the overall ballot boxes observed were as follows: Region Four (37 per cent), Region Three (18 per cent), Region Six (14 per cent), Region Seven (26.82 per cent) and Region Nine (24.65 per cent). These percentages show no consistent pattern reflecting the relative number of ballot boxes in each region. Had the selection of stations been based on a statistical approach that accounted for the relative size of each region, one might have been more confident that the CARICOM sample could be trusted. The core issue across all these observations about the methodology and the results of the ‘recount’ is the flawed design of the CARICOM Team’s work. Moreover, the irony of this report is that the solution (i.e., the CARICOM Team) has created more problems than clear answers to the issues it sought to resolve. Thus, in the absence of a systematic approach to a process that was intended to engender confidence in an election, the electorate is left with a report that is conceptually flawed and statistically imprecise.
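
By contrast, a stratified design with proportional allocation would set each region’s sample in proportion to its share of ballot boxes, so that coverage rates came out nearly uniform rather than ranging from 14 to 37 per cent. A sketch of that allocation, using box counts per region that are illustrative assumptions only, not figures from the report:

```python
# Proportional-to-size allocation: each region's share of the sample mirrors
# its share of ballot boxes, so coverage percentages come out near-uniform.
# The box counts below are illustrative assumptions, not report figures.
boxes_by_region = {
    "Region Three": 300,
    "Region Four": 879,
    "Region Six": 250,
    "Region Seven": 60,
    "Region Nine": 75,
}

total_boxes = sum(boxes_by_region.values())
target_sample = 423

allocation = {
    region: round(target_sample * n / total_boxes)
    for region, n in boxes_by_region.items()
}

for region, n in allocation.items():
    pct = 100 * n / boxes_by_region[region]
    print(f"{region}: sample {n} of {boxes_by_region[region]} ({pct:.1f}%)")
```

Under such a scheme every region’s coverage sits at roughly the same rate (about 27 per cent for these assumed counts, apart from rounding), which is the consistent pattern the observed 14 to 37 per cent spread fails to show.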

Finally, this is clearly a report that should be viewed more as an attempt to create a new methodology that blatantly disregards standard, best-in-class approaches to a ‘process audit’ and/or an electoral recount. As a result, we have a report that excludes data from over 80 per cent of the possible work stations, thereby rendering conclusions from an ‘audit’ based on an unscientific selection of 18.09 per cent of the work stations useless at best. Overall, the multiple contradictions observed throughout the document are striking. In addition to those noted above, the inconsistent description on page 24, paragraph one, is emblematic of similar contradictory statements in the document. There, the Team suggested that the GECOM staff “… for the most part” were “well trained in the basic procedural matters” while admitting that “… it was also evident that there were varying degrees of efficiency and effectiveness of the staff.” These evaluation concepts capture the ideas of ‘doing things right’ (efficiency) and ‘doing the right things’ (effectiveness). If, in fact, ‘varying degrees of efficiency and effectiveness’ were found in the staff observed, then it follows that there were ‘varying degrees’ of doing things right and of doing the right things. Clearly, there is a need to consider the impact of this and the other identified “basket” of concerns on the outcome of the election.

Regards,
Richard Van West-Charles

Source: https://issuu.com/guyanachroniclee-paper/docs/guyana_chronicle_e-paper_6-17-2020