Nimrod
Madmaxista
- Desde
- 20 Jun 2009
- Mensajes
- 2.819
- Reputación
- 3.019
Traducción gugueliana (más o menos) del resumen:
La ideación conspirativa es la tendencia de las personas a creer que los eventos y las relaciones de poder están secretamente manipulados por ciertos grupos y organizaciones clandestinas. Muchas de estas conjeturas son ostensiblemente no falsables, carente de pruebas o demostrablemente falsas, sin embargo, la aceptación pública sigue siendo alta. Esfuerzos para convencer al público en general de la validez de los resultados médicos y científicos pueden verse obstaculizados por tales relatos, que pueden crear la impresión de duda o desacuerdo en las zonas donde esté bien asentada la ciencia. Por el contrario, los ejemplos históricos de conspiraciones expuestas existen y puede ser difícil para la gente diferenciar entre afirmaciones razonables y dudosas. En este trabajo, hemos establecido un modelo matemático simple para conspiraciones que implican múltiples actores en el tiempo, lo que da la probabilidad de error en cualquier conspiración dada. Los parámetros para el modelo se estiman a partir de la literatura de ejemplos de escándalos conocidos, y lse exploran os factores que influyen en el éxito y el fracaso conspiración . El modelo también se utiliza para estimar la probabilidad de reclamaciones de algunas creencias conspirativas mantenidas comúnmente; estos son a saber, que los aterrizajes lunares eran falsos, el cambio climático es un engaño, la banderillación es peligrosa y que la cura del cáncer está siendo suprimida por intereses creados. Las simulaciones de estas afirmaciones predicen que el fallo intrínseco sería inminente, incluso con las estimaciones más generosas para la capacidad de mantenimiento de secreto de los participantes-los resultados de este modelo sugieren que las grandes conspiraciones (≥1000 agentes) se convierten rápidamente en insostenibles y propensas al fracaso. La teoría presentada aquí podría ser útil en la lucha contra las consecuencias potencialmente perjudiciales de relatos falsos y anti-ciencia, y el examen de las condiciones hipotéticas en las que la conspiración sostenible podría ser posible.
En la fuente hay gráficos.
Texto en inglés:
Abstract
Conspiratorial ideation is the tendency of individuals to believe that events and power relations are secretly manipulated by certain clandestine groups and organisations. Many of these ostensibly explanatory conjectures are non-falsifiable, lacking in evidence or demonstrably false, yet public acceptance remains high. Efforts to convince the general public of the validity of medical and scientific findings can be hampered by such narratives, which can create the impression of doubt or disagreement in areas where the science is well established. Conversely, historical examples of exposed conspiracies do exist and it may be difficult for people to differentiate between reasonable and dubious assertions. In this work, we establish a simple mathematical model for conspiracies involving multiple actors with time, which yields failure probability for any given conspiracy. Parameters for the model are estimated from literature examples of known scandals, and the factors influencing conspiracy success and failure are explored. The model is also used to estimate the likelihood of claims from some commonly-held conspiratorial beliefs; these are namely that the moon-landings were faked, climate-change is a hoax, vaccination is dangerous and that a cure for cancer is being suppressed by vested interests. Simulations of these claims predict that intrinsic failure would be imminent even with the most generous estimates for the secret-keeping ability of active participants—the results of this model suggest that large conspiracies (≥1000 agents) quickly become untenable and prone to failure. The theory presented here might be useful in counteracting the potentially deleterious consequences of bogus and anti-science narratives, and examining the hypothetical conditions under which sustainable conspiracy might be possible.
Introduction
Conspiratorial beliefs, which attribute events to secret manipulative actions by powerful individuals, are widely held [1] by a broad-cross section of society. Belief in one conspiracy theory is often correlated with belief in others, and some stripe of conspiratorial belief is ubiquitous across diverse social and racial groups [2]. These concepts run the gauntlet from the political to the supernatural, and a single working definition is not easy to obtain. We shall clarify the working definition of conspiracy theory here as being in line the characterisation of Sunstein et al [1] as “an effort to explain some event or practice by reference to the machinations of powerful people, who attempt to conceal their role (at least until their aims are accomplished)”.While the modern usage of conspiracy theory is often derogatory (pertaining to an exceptionally paranoid and ill-founded world-view) the definition we will use does not a priori dismiss all such theories as inherently false.
However, even with this disclaimer, there are a disconcerting number of conspiracy theories which enjoy popular support and yet are demonstrably nonsensical. This is particularly true of conspiracies over scientific and medical issues where conspiratorial ideation can lead to outright opposition to and rejection of the scientific method [3]. This can be exceptionally detrimental, not only to believers but to society in general; conspiratorial beliefs over medical interventions such as vaccination, for example, can have potentially lethal consequence [4]. Conspiratorial thinking is endemic in anti-vaccination groups, with those advocating the scientific and medical consensus often regarded as agents of some ominous interest group bent on concealing “the truth”. This becomes a defence mechanism to protect beliefs that are incompatible with the evidence, and unsurprisingly perhaps proponents of such views display not only conspiratorial traits but a litany of reasoning flaws, a reliance on anecdote over data and low cognitive complexity in thinking patterns [5].
Similarly, the framing of climate-change as a hoax creates needless uncertainty in public discourse, and increases the risk of damaging inertia instead of corrective action. The dismissal of scientific findings as a hoax also has a political element; a 2011 study found conservative white males in the US were far more likely than other Americans to deny climate change [6]. Similarly, a UK study found that climate-change denialism was more common among politically conservative individuals with traditional values [7]. The public acceptance of climate-change conspiracy tras*cends the typical wide-ranging domain of conspiratorial belief; a 2013 investigation by Lewandowsky et al [8] found that while subjects who subscribed to conspiracist thought tended to reject all scientific propositions they encountered, those with strong traits of conservatism or pronounced free-market world views only tended towards rejecting scientific findings with regulatory implications at odds with their ideological position.
Challenging dubious anti-science assertions is an important element for constructive social debate, and there is some evidence that challenging such narratives can be successful. Belief in the moon-landing hoax is highly associated with acceptance of other conspiracy theories, but there is some evidence that when presented with scientific evidence critical of this narrative that a significant decrease in support for that theory ensues [9]. Previous investigation has also shown that improved communication of knowledge of the scientific consensus can also overcome some conspiratorial thinking on issues as diverse as the link between HIV and AIDs to acceptance of climate-change [10].
Of course, it is worthwhile to take a considered Devil’s advocate approach—there are numerous historical examples of exposed conspiracies and scandals, from Watergate to the recent revelations on the sheer scale of spying on the online activity of citizens by their own governments. It would be unfair then to simply dismiss all allegation of conspiracy as paranoid where in some instances it is demonstrably not so. There is also merit to charges that vested interests can distort and confuse public perception—in the case of climate-change, for example, conservative demagogues have succeeded in casting a perception of doubt on robust science in public discussion [8, 11–14]. Evidently an approach which dismisses these very real concerns out of hand and without due consideration is not good enough, and there must be a clear rationale for clarifying the outlandish from the reasonable.
Something currently lacking that might be useful is a method for ascertaining the likelihood that a conspiracy is viable, and the factors that influence this. The benefits of this would be two-fold; firstly, it would allow one to gauge whether a particular narrative was likely and what scale it would have to operate at. Secondly, and perhaps more usefully, it would help counteract potentially damaging anti-science beliefs by giving an estimate of viability for a conspiracy over time. The parameters for this model are taken from literature accounts of exposed conspiracies and scandals, and used to analyse several commonly held conspiracy theories, and examine the theoretical bounds for the magnitude and time-frame of any posited conspiracy theory.
0.1 Anti-Science conspiracy narratives—A brief overview
Conspiracy theories which posit some nefarious underhanded action by scientists are ubiquitous. In these work, we shall restrict our focus to four prominent beliefs of this genre. These are listed below.
NASA Moon-landing conspiracy—The successful 1969 Apollo 11 mission first put men on the moon, a seminal achievement in human history. Yet even since that historic day, there has been a persistent fringe belief group that strongly believe the moon-landings were faked, mocked up for propaganda purposes. In 2013 it was estimated that 7% of Americans subscribe to this view [15]. Those advocating this conspiracy claim there are inconsistencies in pictures taken on the moon’s surface, despite these claims being comprehensively debunked [16].
Climate change conspiracy—Climate-change denial has a deep political dimension [7, 8]. Despite the overwhelming strength of evidence supporting the scientific consensus of anthropogenic global warming [17], there are many who reject this consensus. Of these, many claim that climate-change is a hoax staged by scientists and environmentalists [18–20], ostensibly to yield research income. Such beliefs are utterly negated by the sheer wealth of evidence against such a proposition, but remain popular due to an often-skewed false balance present in partisan media [20, 21], resulting in public confusion and inertia.
Vaccination conspiracy—Conspiratorial beliefs about vaccination are endemic in the anti-vaccination movement [18, 22]. It is estimated that roughly 20% of Americans hold the long de-bunked notion that there is a link between autism and the MMR vaccine [15], a belief which has reduced uptake of important vaccinations [22] in several countries. Anti-vaccination beliefs and scare-mongering are also endemic in the internet age, with vaccine critical websites asserting dubious information [23, 24]. Ill-founded beliefs over vaccination have been darkly successful in stirring panic and reducing vaccine uptake, which has led to damaging resurgence in diseases such as measles [4].
Cancer cure conspiracy—The belief that a cure for cancer is being withheld by vested interests is a long-standing one [25]. It is often used as a universal deus ex machina for those pushing an alternative alleged cure, and assertion of the conspiracy theory functions as an explanatory device to explain the complete paucity of clinical evidence for such claims [26]. Such claims can be detrimental to patients, some of whom abandon conventional treatment for the lofty but ill-founded promises of alternative medicine [27].
Methods
1.1 Model derivation
We initially assume that for a given conspiracy, conspirators are in general dedicated for the most part to the concealment of their activity. We further assume that a leak of information from any conspirator is sufficient to expose the conspiracy and render it redundant—such leaks might be intentional (in the form of whistle-blowing or defection) or accidental (mistaken release of information). We concern ourselves only with potential intrinsic exposure of the conspiracy and do not consider for now the possibility that external agents may reveal the operation. Thus, it ***ows that the act of a conspiracy being exposed is a relatively rare and independent event. We can then apply Poisson statistics, and express the probability of at least one leak sufficient to lead to failure of the conspiracy as (1) where ϕ is the miccionan number of failures expected per unit time. This is in turn a function of number of conspirators with time N(t) and p, the intrinsic probability of failure per person per year. Then we may specify ϕ by (2) and writing ψ = 1 − p for brevity, the probability of conspiracy failure can be re-written as a function of time, given by (3) There are several possibilities for the parameter N(t), the number of conspirators—the appropriate selection will depend on the type of conspiracy involved. If a conspiracy requires constant upkeep then the number required to sustain the fiction is approximately constant with time. This pertains to situations where some active input in either covering up an event or maintaining a deception is vital. In such a case, the number involved takes a simple form of (4) where No is the initial number of conspirators. If instead the conspiracy is a single event after which no new conspirators are required then over time those involved will die off, reducing probability of exposure. If this is the case, a Gompertzian survival function can be employed for the function N(t). If the average age of the conspirators at the moment of the event is te, then (5) where No is the initial number of involved individuals, and α and β are function constants for the Gompertzian curve. For humans, we can use α = 10−4 and β = 0.085 [28] to approximate human mortality. Finally, if conspirators are rapidly removed due to internal friction or otherwise (an action itself which is arguably a meta-conspiratorial event), there may be circumstances where we can model N(t) as an exponential decay. If members are removed rapidly with only half remaining after a period t2, then the decay constant is and the number of conspirators at a given time is (6) It is important to note that Eq 6 pivots on the assumption that rapid removal of conspirators doesn’t change the per conspirator probability of exposure—this assumption may not hold in practice and is refined in the discussion section. From Eq 3 it is clear that increasing N(t) will always act to increase L(t) no matter what form is chosen for conspirator density. The failure rate with time is slightly more complicated; for the constant case given in Eq 4, L will increase monotonically with time. If instead non-constant forms are used, such as those in Eqs 5 and 6, L is non-linear with time, as illustrated in Fig 1. The time at which L is a maximum in these cases, tm, is given by solving , which yields the indentity (7) This equation is tras*cendental and cannot be solved analytically, but can be readily estimated by graphical or numerical techniques. The maximum failure probability is then L(tm), given by Eq 3. The form of N(t) shapes the dynamics of the problem markedly, as shown in Fig 1.
Fig 1. Projected failure probability L for a conspiracy of 5000 initial conspirators and p = 5 × 10−6 with different population assumptions.
The blue sold line depicts L over time with a constant level of conspirators being maintained. The red dotted line shows a single event with Gompertzian decay of the conspiring population, assuming an average initial age of 40 years old and the dashed orange line shows an exponential decay with number of conspirators being halved every 10 years. In the first case, the likelihood of conspiracy failure always increases with time. In the Gompertzian case, the chances of failure initially increase towards a maximum (L = 0.38 after 29 years in this example), but the death of conspirators with time acts to decrease probability of failure after this. Finally, if conspirators are removed extrinsically, then the curve hits a maximum (L = 0.12 after 14 years) before decaying to lower likelihoods as less conspirators exist to betray confidence.
PLOS ONE: On the Viability of Conspiratorial Beliefs
1.2 Parameter estimation
To use the model, realistic parameter estimates are required. In particular, the parameter p, the probability of an intrinsic leak or failure, is extremely important; if p were zero, absolute conspiracy would be maintained, only resolvable by extrinsic analysis. In practice, this is not the case—historical examples show that even in incredibly secretive organizations, there is always some possibility of an accidental or intentional intrinsic leak whether by whistle-blowing or ineptitude. By definition, details of conspiracy are rarely known but we may very conservatively estimate parameters using data from exposed examples where sufficient data on duration and number of conspirators is publicly available. The three examples used here are namely
The National Security Agency (NSA) PRISM affair—The staggering extent of spying by the NSA and its allies on civilian internet users [29] was exposed by contractor Edward Snowden in 2013. The extent of the eavesdropping was unprecedented, including the tapping of fiber-optic cables, phone calls from allied heads of state and a huge amount of meta-data [30].
The Tuskegee syphilis experiment—In 1932 the US Public Health Service began an observational study on African-American men who had contracted syphilis in Alabama. The study became unethical in the mid 1940s, when penicillin was shown to effectively cure the ailment and yet was not given to the infected men. Ethical questions about the research were raised in the mid 1960s, and finally exposed by researcher Dr. Peter Buxtun in 1972 [31–33].
The Federal Bureau of Investigation (FBI) forensic scandal—Dr. Frederic Whitehurst wrote hundreds of letters to his superiors detailing the pseudoscientific nature of many of the FBI forensics tests. The dubious nature of these protocols resulted in a large number of innocent men being detained for decades, several of whom were executed for these crimes or died in prison, before Whitehurst exposed the debacle in 1998. A subsequent report by the FBI and Department of justice found that at least 26 of the 28 dedicated hair analysts gave misleading testimony, prompting an on-going massive re-evaluation of unsafe convictions [34, 35].
With data available from these events, we can estimate values for p conservatively. We assume that after duration t when conspiracies are uncovered that, their probability of failure stands at L ≥ 0.5. A lower-bound for p is then given by (8)
There is considerable and unavoidable ambiguity on some of these estimates, especially on the number of people with full knowledge of the event. In the PRISM case, the figure of 30,000 comes from total NSA staff. In reality, the proportion of those employed would would have knowledge of this program would likely be a lot less but we take the upper bound figure to minimize the estimate of p. Given the short time-frame involved, we further assume the number of conspirators stayed approximately constant over the duration before the event was exposed. The situation is even more complicated regarding the Tuskegee experiment. This originally fell under the remit of the venereal diseases division of the United States Public Health Service (USPHS) in the early 1930s, before this department was restructured in later years. Historical employment levels for the USPHS are not readily available, so the estimation of 6700 is taken from data for current officer staff levels of the entire USPHS. This likely over-estimates the number involved substantially, which historically would have chiefly concerned only the much smaller venereal disease division. The FBI forensics scandal is also difficult to quantify; while 28 agents were specifically involved with the microscopic hair analysis debacle [39], Dr Whitehurst’s whistle-blowing exposed much wider scale problems affecting the entire forensics department. Accordingly, we have used the modern estimate of FBI forensic staff both scientific and agency. Taking a larger value for N tends to over-estimate the ability of a mass of conspirators to retain a secret, yet it allows us to set an extreme lower bound for p, the failure odds per unit time per conspirator. This essentially yields a “best-case” scenario for the conspirators.
In addition to this, the life-time of the conspiracy is not always clear—in the NSA case, estimates span only a narrow range, between 5 and 6 years [29]. The Tuskegee experiment is more ambigious; the original experiment commenced in the 1930s but did not become unethical until the late 1940s, when the decision was made to deny penicilin to the afflicted individuals. There were also ethical questions raised by others before Dr. Peter Buxten, but we use 1972 as our upper-limit as it was his whistle-blowing that focused attention on the long-running abuses. Finally, the FBI forensics time-frame is rather opaque—the FBI forensics laboratory was established in 1932, and naively we could take the conspiracy life-time as 66 years before exposure in 1998, in which case this would push the estimate of p down by roughly an order of magnitude to p > 2.11 × 10−5. Yet this is unrealistic, as the problems with certain aspects of nascent criminology were unlikely to have been known. However, between 1992 and 1997 Dr. Whitehurst penned several hundred letters to this superiors about gaping problems with aspects of the analysis, which were roundly ignored. It ***ows that the FBI were aware from at least 1992 that their forensic methods were untenable, giving a life-time until exposure of only 6 years. In all cases, we take the largest realistic value of t as this pertains to the best-case scenario for a conspiracy.
1.3 Experimental method
The model established allows estimation of how certain parameters influence the success or failure chance for any conspiracy. From Table 1, assuming the derived best-case scenario value for the conspirators (p = 4.09 × 10−6), we can apply the model outlined to several popular and enduring conspiracy theories and ascertain their viability with time. As discussed in the previous section, this estimate is intentionally optimistic for conspirators, and corresponds to a case where the average expected number of fatal leaks for a conspiracy is as low as roughly 4 in a million. In keeping with “best case scenario” estimates for conspiracies, we also neglect the upper figure of p = 2.45 × 10−4, which is roughly 60 times greater than the minimum projected probability of failure per conspirator per year as outlined in Table 1.
Table 1. Known and derived parameters.
PLOS ONE: On the Viability of Conspiratorial Beliefs
Results
Table 2 lists non-exhaustive estimations of the number of conspirators required for the anti-science belief outlined. Critically, the estimates for N(t) shown here assume all scientists involved would have be aware of an active cover-up, and that a small group of odious actors would be unable to deceive the scientific community for long timescales; the rationale for this assumption is expanded further in the discussion section. In most of these cases, constant up-keep would be required to maintain secrecy, so N(t) = No. In the case of the NASA hoax conjecture, it could be argued that the conspiracy was a single-event fiction, and thus the Gompertzian population form in Eq 5 could apply. This is not a very realistic assumption, but is considered here too. The climate-change conspiracy narrative requires some clarification too; those sceptical of the scientific consensus on anthropogenic climate change may take either a “hard” position that climate-change is not occurring or a “soft” position that it may be occurring but isn’t anthropogenic. For this investigation, we’ll define climate change conspiracy as those taking a hard position for simplicity. Results are shown in Fig 2. From this, we can also determine the maximum time-scales before imminent failure under best-possible conditions for these conspiracies, taken as L > 0.95. These estimates are given in Table 3.
Table 2. Non-exhaustive estimates of minimum numbers needed for conspiracy.
PLOS ONE: On the Viability of Conspiratorial Beliefs
thumbnail
Fig 2.
Failure curves for (a) NASA moon-landing hoax—results for both constant population and Gompertzian function are so close as to be non-resolvable visually (b) Climate change hoax—The blue solid line depicts failure probability with time if all scientific bodies endorsing the scientific consensus are involved, the red-dotted line presents the curve if solely active climate researchers were involved (c) Vaccination conspiracy—blue solid line showing failure probability with time for a combination of public health bodies and major drug manufacturers and the red-dotted line depicting case if only public health bodies were conspiring (d) Failure with time for a suppressed cancer cure conspiracy.
Table 3. Maximum time to imminent failure (L > 0.95).
PLOS ONE: On the Viability of Conspiratorial Beliefs
Discussion
The analysis here predicts that even with parameter estimates favourable to conspiratorial leanings that the conspiracies analysed tend rapidly towards collapse. Even if there was a concerted effort, the sheer number of people required for the sheer scale of hypothetical scientific deceptions would inextricably undermine these nascent conspiracies. For a conspiracy of even only a few thousand actors, intrinsic failure would arise within decades. For hundreds of thousands, such failure would be assured within less than half a decade. It’s also important to note that this analysis deals solely with intrinsic failure, or the odds of a conspiracy being exposed intentionally or accidentally by actors involved—extrinsic analysis by non-participants would also increase the odds of detection, rendering such Byzantine cover-ups far more likely to fail. Moreover, the number of actors in this analysis as outlined in Table 2 represent an incredibly conservative estimate. A more comprehensive quantification would undoubtedly drive failure rate up for all considered conspiracy narratives.
This problem appears insurmountable for any large conspiracy; if it requires constant upkeep (N(t) ≈ No) then odds of failure approach unity with time. If we assign a detection threshold under which a conspiracy should remain (μ = 0.05) in a time-frame, then Table 4 enumerates the maximum number of conspirators possible. Even for a relatively short time of 5 years, the limit is hit with only 2521 agents. To sustain it for more than 10 years, less than 1000 people can be involved even with the generous estimate of p = 4.09 × 10−6 derived in this work. Even for single-events with Gompertzian population decay, the problem of large conspiracy failure is not adequately circumvented—for such an event, the odds of failure exceed 5% at around 650 participants even with the ideal value of p and an average age of participants of 40 years. In this situation however, failure probability eventually falls as the population involved decrease, meaning that the threshold can be considered a maximum probability of detection in this scenario. This probability also rapidly increases with number of conspirators involved, rendering large sustained conspiracies unlikely. Under ideal circumstances, it would only be possible to keep a single conspiratorial event below detection thereshold if the number of actors involved was very small (≪ 1000).
Table 4. Maximum number of conspirators to stay below threshold (μ ≤ 0.05).
http://dx.doi.org/10.1371/journal.pone.0147905.t004
As outlined in the section on parameter estimation, estimates used here were deliberately selected to be maximally conducive to conspirators; the lowest values for p obtained were used for estimates, but the highest value was roughly two orders of magnitude above this. If this estimate is instead used, it would have a very stark effect, hugely decreasing time-frame to detection as depicted in Fig 3. Given the lack of clarity in getting precise numbers and time-frames, there is inherent uncertainty in this work on the estimated parameters and better estimates would allow better quantification of p. There is also an open question of whether using exposed conspiracies to estimate parameters might itself introduce bias and produce overly high estimates of p—this may be the case, but given the highly conservative estimates employed for other parameters, it is more likely that p for most conspiracies will be much higher than our estimate, as even relatively small conspiracies (such as Watergate, for example) have historically been rapidly exposed. It is also important to note that p will likely vary markedly for different conspiracies, depending on how deeply invested agents are invested in a given conspiracy and the figures here are at best a conservative approximation of typical values. However, even if agents are highly invested in a conspiracy, p also includes the odds of an accidental intrinsic exposure. While conspiracies do undoubtedly happen, their continued secrecy is probably more due to keeping the number of agents low than having an intrinsically small per agent per time leak probability.
Fig 4.
Failure curves for a conspiracy of No = 5000 over a 50 year period with exponential removal of conspirators with half-life t2 of 5 years () with (a) assumption of constant p (b) proportional change in probability p(t) = po eλt.
http://dx.doi.org/10.1371/journal.pone.0147905.g004
The theory outlined is useful in predicting the broad patterns expected from a conspiracy event, but does not consider the dynamics, motivations and interactions of individual agents. This interplay might be an avenue for future work, perhaps employing agent based models to account for the various internal frictions and pressures affecting the gross failure rate. The approach outlined here might give some insight into the gross behaviour of conspiracies, but agent based modelling focused on individual actors interacting with certain probabilities might better capture the intricacies of conspiracy and whistle-blowing. Such models could also readily be informed by psychological data, ascribing simulated actors a spectrum of traits, with specific interaction rules to see whether the emergent dynamics affect the success or failure of any secretive event.
While challenging anti-science is important, it is important to note the limitations of this approach. Explaining misconceptions and analysis such as this one might be useful to a reasonable core [9], but this might not be the case if a person is sufficiently convinced of a narrative. Recent work has illustrated that conspiracy theories can spread rapidly online in polarized echo-chambers, which may be deeply invested in a particular narrative and closed off to other sources of information [61]. In a recent Californian study on parents, it was found that countering anti-vaccination misconceptions related to autism was possible with clear explanation, but that for parents resolutely opposed to vaccination attempts to use rational approach further entrenched them in their ill-founded views [62]. The grim reality is that there appears to be a cohort so ideologically invested in a belief that for whom no reasoning will shift, their convictions impervious to the intrusions of reality. In these cases, it is highly unlikely that a simple mathematical demonstration of the untenability of their belief will change their view-point. However, for the less invested such an intervention might indeed prove useful.
Acknowledgments
Many thanks to Dr. Frederic Whitehurst for this helpful first-hand insight into the FBI forensics scandal. As the author is a physicist rather than a psychologist, I am indebted to Profs. Stephan Lewandowsky and Ted Goertzel and Ms. Mathilde Hernu for their valuable input on conspiratorial thinking. Thanks also to Drs. Ben Goertzel and David Basanta for their comments and suggestions, and to the reviewers for their helpful comments and suggestions. I would also like to thank my University of Oxford colleagues for their continued support, in particular Dr. Mike Partridge. This work did not require specific funding, from nebulous clandestine cabals or otherwise.
Author Contributions
Conceived and designed the experiments: DRG. Performed the experiments: DRG. Analyzed the data: DRG. Contributed reagents/materials/analysis tools: DRG. Wrote the paper: DRG.
La ideación conspirativa es la tendencia de las personas a creer que los eventos y las relaciones de poder están secretamente manipulados por ciertos grupos y organizaciones clandestinas. Muchas de estas conjeturas son ostensiblemente no falsables, carente de pruebas o demostrablemente falsas, sin embargo, la aceptación pública sigue siendo alta. Esfuerzos para convencer al público en general de la validez de los resultados médicos y científicos pueden verse obstaculizados por tales relatos, que pueden crear la impresión de duda o desacuerdo en las zonas donde esté bien asentada la ciencia. Por el contrario, los ejemplos históricos de conspiraciones expuestas existen y puede ser difícil para la gente diferenciar entre afirmaciones razonables y dudosas. En este trabajo, hemos establecido un modelo matemático simple para conspiraciones que implican múltiples actores en el tiempo, lo que da la probabilidad de error en cualquier conspiración dada. Los parámetros para el modelo se estiman a partir de la literatura de ejemplos de escándalos conocidos, y lse exploran os factores que influyen en el éxito y el fracaso conspiración . El modelo también se utiliza para estimar la probabilidad de reclamaciones de algunas creencias conspirativas mantenidas comúnmente; estos son a saber, que los aterrizajes lunares eran falsos, el cambio climático es un engaño, la banderillación es peligrosa y que la cura del cáncer está siendo suprimida por intereses creados. Las simulaciones de estas afirmaciones predicen que el fallo intrínseco sería inminente, incluso con las estimaciones más generosas para la capacidad de mantenimiento de secreto de los participantes-los resultados de este modelo sugieren que las grandes conspiraciones (≥1000 agentes) se convierten rápidamente en insostenibles y propensas al fracaso. La teoría presentada aquí podría ser útil en la lucha contra las consecuencias potencialmente perjudiciales de relatos falsos y anti-ciencia, y el examen de las condiciones hipotéticas en las que la conspiración sostenible podría ser posible.
En la fuente hay gráficos.
Texto en inglés:
Abstract
Conspiratorial ideation is the tendency of individuals to believe that events and power relations are secretly manipulated by certain clandestine groups and organisations. Many of these ostensibly explanatory conjectures are non-falsifiable, lacking in evidence or demonstrably false, yet public acceptance remains high. Efforts to convince the general public of the validity of medical and scientific findings can be hampered by such narratives, which can create the impression of doubt or disagreement in areas where the science is well established. Conversely, historical examples of exposed conspiracies do exist and it may be difficult for people to differentiate between reasonable and dubious assertions. In this work, we establish a simple mathematical model for conspiracies involving multiple actors with time, which yields failure probability for any given conspiracy. Parameters for the model are estimated from literature examples of known scandals, and the factors influencing conspiracy success and failure are explored. The model is also used to estimate the likelihood of claims from some commonly-held conspiratorial beliefs; these are namely that the moon-landings were faked, climate-change is a hoax, vaccination is dangerous and that a cure for cancer is being suppressed by vested interests. Simulations of these claims predict that intrinsic failure would be imminent even with the most generous estimates for the secret-keeping ability of active participants—the results of this model suggest that large conspiracies (≥1000 agents) quickly become untenable and prone to failure. The theory presented here might be useful in counteracting the potentially deleterious consequences of bogus and anti-science narratives, and examining the hypothetical conditions under which sustainable conspiracy might be possible.
Introduction
Conspiratorial beliefs, which attribute events to secret manipulative actions by powerful individuals, are widely held [1] by a broad-cross section of society. Belief in one conspiracy theory is often correlated with belief in others, and some stripe of conspiratorial belief is ubiquitous across diverse social and racial groups [2]. These concepts run the gauntlet from the political to the supernatural, and a single working definition is not easy to obtain. We shall clarify the working definition of conspiracy theory here as being in line the characterisation of Sunstein et al [1] as “an effort to explain some event or practice by reference to the machinations of powerful people, who attempt to conceal their role (at least until their aims are accomplished)”.While the modern usage of conspiracy theory is often derogatory (pertaining to an exceptionally paranoid and ill-founded world-view) the definition we will use does not a priori dismiss all such theories as inherently false.
However, even with this disclaimer, there are a disconcerting number of conspiracy theories which enjoy popular support and yet are demonstrably nonsensical. This is particularly true of conspiracies over scientific and medical issues where conspiratorial ideation can lead to outright opposition to and rejection of the scientific method [3]. This can be exceptionally detrimental, not only to believers but to society in general; conspiratorial beliefs over medical interventions such as vaccination, for example, can have potentially lethal consequence [4]. Conspiratorial thinking is endemic in anti-vaccination groups, with those advocating the scientific and medical consensus often regarded as agents of some ominous interest group bent on concealing “the truth”. This becomes a defence mechanism to protect beliefs that are incompatible with the evidence, and unsurprisingly perhaps proponents of such views display not only conspiratorial traits but a litany of reasoning flaws, a reliance on anecdote over data and low cognitive complexity in thinking patterns [5].
Similarly, the framing of climate-change as a hoax creates needless uncertainty in public discourse, and increases the risk of damaging inertia instead of corrective action. The dismissal of scientific findings as a hoax also has a political element; a 2011 study found conservative white males in the US were far more likely than other Americans to deny climate change [6]. Similarly, a UK study found that climate-change denialism was more common among politically conservative individuals with traditional values [7]. The public acceptance of climate-change conspiracy tras*cends the typical wide-ranging domain of conspiratorial belief; a 2013 investigation by Lewandowsky et al [8] found that while subjects who subscribed to conspiracist thought tended to reject all scientific propositions they encountered, those with strong traits of conservatism or pronounced free-market world views only tended towards rejecting scientific findings with regulatory implications at odds with their ideological position.
Challenging dubious anti-science assertions is an important element for constructive social debate, and there is some evidence that challenging such narratives can be successful. Belief in the moon-landing hoax is highly associated with acceptance of other conspiracy theories, but there is some evidence that when presented with scientific evidence critical of this narrative that a significant decrease in support for that theory ensues [9]. Previous investigation has also shown that improved communication of knowledge of the scientific consensus can also overcome some conspiratorial thinking on issues as diverse as the link between HIV and AIDs to acceptance of climate-change [10].
Of course, it is worthwhile to take a considered Devil’s advocate approach—there are numerous historical examples of exposed conspiracies and scandals, from Watergate to the recent revelations on the sheer scale of spying on the online activity of citizens by their own governments. It would be unfair then to simply dismiss all allegation of conspiracy as paranoid where in some instances it is demonstrably not so. There is also merit to charges that vested interests can distort and confuse public perception—in the case of climate-change, for example, conservative demagogues have succeeded in casting a perception of doubt on robust science in public discussion [8, 11–14]. Evidently an approach which dismisses these very real concerns out of hand and without due consideration is not good enough, and there must be a clear rationale for clarifying the outlandish from the reasonable.
Something currently lacking that might be useful is a method for ascertaining the likelihood that a conspiracy is viable, and the factors that influence this. The benefits of this would be two-fold; firstly, it would allow one to gauge whether a particular narrative was likely and what scale it would have to operate at. Secondly, and perhaps more usefully, it would help counteract potentially damaging anti-science beliefs by giving an estimate of viability for a conspiracy over time. The parameters for this model are taken from literature accounts of exposed conspiracies and scandals, and used to analyse several commonly held conspiracy theories, and examine the theoretical bounds for the magnitude and time-frame of any posited conspiracy theory.
0.1 Anti-Science conspiracy narratives—A brief overview
Conspiracy theories which posit some nefarious underhanded action by scientists are ubiquitous. In these work, we shall restrict our focus to four prominent beliefs of this genre. These are listed below.
NASA Moon-landing conspiracy—The successful 1969 Apollo 11 mission first put men on the moon, a seminal achievement in human history. Yet even since that historic day, there has been a persistent fringe belief group that strongly believe the moon-landings were faked, mocked up for propaganda purposes. In 2013 it was estimated that 7% of Americans subscribe to this view [15]. Those advocating this conspiracy claim there are inconsistencies in pictures taken on the moon’s surface, despite these claims being comprehensively debunked [16].
Climate change conspiracy—Climate-change denial has a deep political dimension [7, 8]. Despite the overwhelming strength of evidence supporting the scientific consensus of anthropogenic global warming [17], there are many who reject this consensus. Of these, many claim that climate-change is a hoax staged by scientists and environmentalists [18–20], ostensibly to yield research income. Such beliefs are utterly negated by the sheer wealth of evidence against such a proposition, but remain popular due to an often-skewed false balance present in partisan media [20, 21], resulting in public confusion and inertia.
Vaccination conspiracy—Conspiratorial beliefs about vaccination are endemic in the anti-vaccination movement [18, 22]. It is estimated that roughly 20% of Americans hold the long de-bunked notion that there is a link between autism and the MMR vaccine [15], a belief which has reduced uptake of important vaccinations [22] in several countries. Anti-vaccination beliefs and scare-mongering are also endemic in the internet age, with vaccine critical websites asserting dubious information [23, 24]. Ill-founded beliefs over vaccination have been darkly successful in stirring panic and reducing vaccine uptake, which has led to damaging resurgence in diseases such as measles [4].
Cancer cure conspiracy—The belief that a cure for cancer is being withheld by vested interests is a long-standing one [25]. It is often used as a universal deus ex machina for those pushing an alternative alleged cure, and assertion of the conspiracy theory functions as an explanatory device to explain the complete paucity of clinical evidence for such claims [26]. Such claims can be detrimental to patients, some of whom abandon conventional treatment for the lofty but ill-founded promises of alternative medicine [27].
Methods
1.1 Model derivation
We initially assume that for a given conspiracy, conspirators are in general dedicated for the most part to the concealment of their activity. We further assume that a leak of information from any conspirator is sufficient to expose the conspiracy and render it redundant—such leaks might be intentional (in the form of whistle-blowing or defection) or accidental (mistaken release of information). We concern ourselves only with potential intrinsic exposure of the conspiracy and do not consider for now the possibility that external agents may reveal the operation. Thus, it ***ows that the act of a conspiracy being exposed is a relatively rare and independent event. We can then apply Poisson statistics, and express the probability of at least one leak sufficient to lead to failure of the conspiracy as (1) where ϕ is the miccionan number of failures expected per unit time. This is in turn a function of number of conspirators with time N(t) and p, the intrinsic probability of failure per person per year. Then we may specify ϕ by (2) and writing ψ = 1 − p for brevity, the probability of conspiracy failure can be re-written as a function of time, given by (3) There are several possibilities for the parameter N(t), the number of conspirators—the appropriate selection will depend on the type of conspiracy involved. If a conspiracy requires constant upkeep then the number required to sustain the fiction is approximately constant with time. This pertains to situations where some active input in either covering up an event or maintaining a deception is vital. In such a case, the number involved takes a simple form of (4) where No is the initial number of conspirators. If instead the conspiracy is a single event after which no new conspirators are required then over time those involved will die off, reducing probability of exposure. If this is the case, a Gompertzian survival function can be employed for the function N(t). If the average age of the conspirators at the moment of the event is te, then (5) where No is the initial number of involved individuals, and α and β are function constants for the Gompertzian curve. For humans, we can use α = 10−4 and β = 0.085 [28] to approximate human mortality. Finally, if conspirators are rapidly removed due to internal friction or otherwise (an action itself which is arguably a meta-conspiratorial event), there may be circumstances where we can model N(t) as an exponential decay. If members are removed rapidly with only half remaining after a period t2, then the decay constant is and the number of conspirators at a given time is (6) It is important to note that Eq 6 pivots on the assumption that rapid removal of conspirators doesn’t change the per conspirator probability of exposure—this assumption may not hold in practice and is refined in the discussion section. From Eq 3 it is clear that increasing N(t) will always act to increase L(t) no matter what form is chosen for conspirator density. The failure rate with time is slightly more complicated; for the constant case given in Eq 4, L will increase monotonically with time. If instead non-constant forms are used, such as those in Eqs 5 and 6, L is non-linear with time, as illustrated in Fig 1. The time at which L is a maximum in these cases, tm, is given by solving , which yields the indentity (7) This equation is tras*cendental and cannot be solved analytically, but can be readily estimated by graphical or numerical techniques. The maximum failure probability is then L(tm), given by Eq 3. The form of N(t) shapes the dynamics of the problem markedly, as shown in Fig 1.
Fig 1. Projected failure probability L for a conspiracy of 5000 initial conspirators and p = 5 × 10−6 with different population assumptions.
The blue sold line depicts L over time with a constant level of conspirators being maintained. The red dotted line shows a single event with Gompertzian decay of the conspiring population, assuming an average initial age of 40 years old and the dashed orange line shows an exponential decay with number of conspirators being halved every 10 years. In the first case, the likelihood of conspiracy failure always increases with time. In the Gompertzian case, the chances of failure initially increase towards a maximum (L = 0.38 after 29 years in this example), but the death of conspirators with time acts to decrease probability of failure after this. Finally, if conspirators are removed extrinsically, then the curve hits a maximum (L = 0.12 after 14 years) before decaying to lower likelihoods as less conspirators exist to betray confidence.
PLOS ONE: On the Viability of Conspiratorial Beliefs
1.2 Parameter estimation
To use the model, realistic parameter estimates are required. In particular, the parameter p, the probability of an intrinsic leak or failure, is extremely important; if p were zero, absolute conspiracy would be maintained, only resolvable by extrinsic analysis. In practice, this is not the case—historical examples show that even in incredibly secretive organizations, there is always some possibility of an accidental or intentional intrinsic leak whether by whistle-blowing or ineptitude. By definition, details of conspiracy are rarely known but we may very conservatively estimate parameters using data from exposed examples where sufficient data on duration and number of conspirators is publicly available. The three examples used here are namely
The National Security Agency (NSA) PRISM affair—The staggering extent of spying by the NSA and its allies on civilian internet users [29] was exposed by contractor Edward Snowden in 2013. The extent of the eavesdropping was unprecedented, including the tapping of fiber-optic cables, phone calls from allied heads of state and a huge amount of meta-data [30].
The Tuskegee syphilis experiment—In 1932 the US Public Health Service began an observational study on African-American men who had contracted syphilis in Alabama. The study became unethical in the mid 1940s, when penicillin was shown to effectively cure the ailment and yet was not given to the infected men. Ethical questions about the research were raised in the mid 1960s, and finally exposed by researcher Dr. Peter Buxtun in 1972 [31–33].
The Federal Bureau of Investigation (FBI) forensic scandal—Dr. Frederic Whitehurst wrote hundreds of letters to his superiors detailing the pseudoscientific nature of many of the FBI forensics tests. The dubious nature of these protocols resulted in a large number of innocent men being detained for decades, several of whom were executed for these crimes or died in prison, before Whitehurst exposed the debacle in 1998. A subsequent report by the FBI and Department of justice found that at least 26 of the 28 dedicated hair analysts gave misleading testimony, prompting an on-going massive re-evaluation of unsafe convictions [34, 35].
With data available from these events, we can estimate values for p conservatively. We assume that after duration t when conspiracies are uncovered that, their probability of failure stands at L ≥ 0.5. A lower-bound for p is then given by (8)
There is considerable and unavoidable ambiguity on some of these estimates, especially on the number of people with full knowledge of the event. In the PRISM case, the figure of 30,000 comes from total NSA staff. In reality, the proportion of those employed would would have knowledge of this program would likely be a lot less but we take the upper bound figure to minimize the estimate of p. Given the short time-frame involved, we further assume the number of conspirators stayed approximately constant over the duration before the event was exposed. The situation is even more complicated regarding the Tuskegee experiment. This originally fell under the remit of the venereal diseases division of the United States Public Health Service (USPHS) in the early 1930s, before this department was restructured in later years. Historical employment levels for the USPHS are not readily available, so the estimation of 6700 is taken from data for current officer staff levels of the entire USPHS. This likely over-estimates the number involved substantially, which historically would have chiefly concerned only the much smaller venereal disease division. The FBI forensics scandal is also difficult to quantify; while 28 agents were specifically involved with the microscopic hair analysis debacle [39], Dr Whitehurst’s whistle-blowing exposed much wider scale problems affecting the entire forensics department. Accordingly, we have used the modern estimate of FBI forensic staff both scientific and agency. Taking a larger value for N tends to over-estimate the ability of a mass of conspirators to retain a secret, yet it allows us to set an extreme lower bound for p, the failure odds per unit time per conspirator. This essentially yields a “best-case” scenario for the conspirators.
In addition to this, the life-time of the conspiracy is not always clear—in the NSA case, estimates span only a narrow range, between 5 and 6 years [29]. The Tuskegee experiment is more ambigious; the original experiment commenced in the 1930s but did not become unethical until the late 1940s, when the decision was made to deny penicilin to the afflicted individuals. There were also ethical questions raised by others before Dr. Peter Buxten, but we use 1972 as our upper-limit as it was his whistle-blowing that focused attention on the long-running abuses. Finally, the FBI forensics time-frame is rather opaque—the FBI forensics laboratory was established in 1932, and naively we could take the conspiracy life-time as 66 years before exposure in 1998, in which case this would push the estimate of p down by roughly an order of magnitude to p > 2.11 × 10−5. Yet this is unrealistic, as the problems with certain aspects of nascent criminology were unlikely to have been known. However, between 1992 and 1997 Dr. Whitehurst penned several hundred letters to this superiors about gaping problems with aspects of the analysis, which were roundly ignored. It ***ows that the FBI were aware from at least 1992 that their forensic methods were untenable, giving a life-time until exposure of only 6 years. In all cases, we take the largest realistic value of t as this pertains to the best-case scenario for a conspiracy.
1.3 Experimental method
The model established allows estimation of how certain parameters influence the success or failure chance for any conspiracy. From Table 1, assuming the derived best-case scenario value for the conspirators (p = 4.09 × 10−6), we can apply the model outlined to several popular and enduring conspiracy theories and ascertain their viability with time. As discussed in the previous section, this estimate is intentionally optimistic for conspirators, and corresponds to a case where the average expected number of fatal leaks for a conspiracy is as low as roughly 4 in a million. In keeping with “best case scenario” estimates for conspiracies, we also neglect the upper figure of p = 2.45 × 10−4, which is roughly 60 times greater than the minimum projected probability of failure per conspirator per year as outlined in Table 1.
Table 1. Known and derived parameters.
PLOS ONE: On the Viability of Conspiratorial Beliefs
Results
Table 2 lists non-exhaustive estimations of the number of conspirators required for the anti-science belief outlined. Critically, the estimates for N(t) shown here assume all scientists involved would have be aware of an active cover-up, and that a small group of odious actors would be unable to deceive the scientific community for long timescales; the rationale for this assumption is expanded further in the discussion section. In most of these cases, constant up-keep would be required to maintain secrecy, so N(t) = No. In the case of the NASA hoax conjecture, it could be argued that the conspiracy was a single-event fiction, and thus the Gompertzian population form in Eq 5 could apply. This is not a very realistic assumption, but is considered here too. The climate-change conspiracy narrative requires some clarification too; those sceptical of the scientific consensus on anthropogenic climate change may take either a “hard” position that climate-change is not occurring or a “soft” position that it may be occurring but isn’t anthropogenic. For this investigation, we’ll define climate change conspiracy as those taking a hard position for simplicity. Results are shown in Fig 2. From this, we can also determine the maximum time-scales before imminent failure under best-possible conditions for these conspiracies, taken as L > 0.95. These estimates are given in Table 3.
Table 2. Non-exhaustive estimates of minimum numbers needed for conspiracy.
PLOS ONE: On the Viability of Conspiratorial Beliefs
thumbnail
Fig 2.
Failure curves for (a) NASA moon-landing hoax—results for both constant population and Gompertzian function are so close as to be non-resolvable visually (b) Climate change hoax—The blue solid line depicts failure probability with time if all scientific bodies endorsing the scientific consensus are involved, the red-dotted line presents the curve if solely active climate researchers were involved (c) Vaccination conspiracy—blue solid line showing failure probability with time for a combination of public health bodies and major drug manufacturers and the red-dotted line depicting case if only public health bodies were conspiring (d) Failure with time for a suppressed cancer cure conspiracy.
Table 3. Maximum time to imminent failure (L > 0.95).
PLOS ONE: On the Viability of Conspiratorial Beliefs
Discussion
The analysis here predicts that even with parameter estimates favourable to conspiratorial leanings that the conspiracies analysed tend rapidly towards collapse. Even if there was a concerted effort, the sheer number of people required for the sheer scale of hypothetical scientific deceptions would inextricably undermine these nascent conspiracies. For a conspiracy of even only a few thousand actors, intrinsic failure would arise within decades. For hundreds of thousands, such failure would be assured within less than half a decade. It’s also important to note that this analysis deals solely with intrinsic failure, or the odds of a conspiracy being exposed intentionally or accidentally by actors involved—extrinsic analysis by non-participants would also increase the odds of detection, rendering such Byzantine cover-ups far more likely to fail. Moreover, the number of actors in this analysis as outlined in Table 2 represent an incredibly conservative estimate. A more comprehensive quantification would undoubtedly drive failure rate up for all considered conspiracy narratives.
This problem appears insurmountable for any large conspiracy; if it requires constant upkeep (N(t) ≈ No) then odds of failure approach unity with time. If we assign a detection threshold under which a conspiracy should remain (μ = 0.05) in a time-frame, then Table 4 enumerates the maximum number of conspirators possible. Even for a relatively short time of 5 years, the limit is hit with only 2521 agents. To sustain it for more than 10 years, less than 1000 people can be involved even with the generous estimate of p = 4.09 × 10−6 derived in this work. Even for single-events with Gompertzian population decay, the problem of large conspiracy failure is not adequately circumvented—for such an event, the odds of failure exceed 5% at around 650 participants even with the ideal value of p and an average age of participants of 40 years. In this situation however, failure probability eventually falls as the population involved decrease, meaning that the threshold can be considered a maximum probability of detection in this scenario. This probability also rapidly increases with number of conspirators involved, rendering large sustained conspiracies unlikely. Under ideal circumstances, it would only be possible to keep a single conspiratorial event below detection thereshold if the number of actors involved was very small (≪ 1000).
Table 4. Maximum number of conspirators to stay below threshold (μ ≤ 0.05).
http://dx.doi.org/10.1371/journal.pone.0147905.t004
As outlined in the section on parameter estimation, estimates used here were deliberately selected to be maximally conducive to conspirators; the lowest values for p obtained were used for estimates, but the highest value was roughly two orders of magnitude above this. If this estimate is instead used, it would have a very stark effect, hugely decreasing time-frame to detection as depicted in Fig 3. Given the lack of clarity in getting precise numbers and time-frames, there is inherent uncertainty in this work on the estimated parameters and better estimates would allow better quantification of p. There is also an open question of whether using exposed conspiracies to estimate parameters might itself introduce bias and produce overly high estimates of p—this may be the case, but given the highly conservative estimates employed for other parameters, it is more likely that p for most conspiracies will be much higher than our estimate, as even relatively small conspiracies (such as Watergate, for example) have historically been rapidly exposed. It is also important to note that p will likely vary markedly for different conspiracies, depending on how deeply invested agents are invested in a given conspiracy and the figures here are at best a conservative approximation of typical values. However, even if agents are highly invested in a conspiracy, p also includes the odds of an accidental intrinsic exposure. While conspiracies do undoubtedly happen, their continued secrecy is probably more due to keeping the number of agents low than having an intrinsically small per agent per time leak probability.
Fig 4.
Failure curves for a conspiracy of No = 5000 over a 50 year period with exponential removal of conspirators with half-life t2 of 5 years () with (a) assumption of constant p (b) proportional change in probability p(t) = po eλt.
http://dx.doi.org/10.1371/journal.pone.0147905.g004
The theory outlined is useful in predicting the broad patterns expected from a conspiracy event, but does not consider the dynamics, motivations and interactions of individual agents. This interplay might be an avenue for future work, perhaps employing agent based models to account for the various internal frictions and pressures affecting the gross failure rate. The approach outlined here might give some insight into the gross behaviour of conspiracies, but agent based modelling focused on individual actors interacting with certain probabilities might better capture the intricacies of conspiracy and whistle-blowing. Such models could also readily be informed by psychological data, ascribing simulated actors a spectrum of traits, with specific interaction rules to see whether the emergent dynamics affect the success or failure of any secretive event.
While challenging anti-science is important, it is important to note the limitations of this approach. Explaining misconceptions and analysis such as this one might be useful to a reasonable core [9], but this might not be the case if a person is sufficiently convinced of a narrative. Recent work has illustrated that conspiracy theories can spread rapidly online in polarized echo-chambers, which may be deeply invested in a particular narrative and closed off to other sources of information [61]. In a recent Californian study on parents, it was found that countering anti-vaccination misconceptions related to autism was possible with clear explanation, but that for parents resolutely opposed to vaccination attempts to use rational approach further entrenched them in their ill-founded views [62]. The grim reality is that there appears to be a cohort so ideologically invested in a belief that for whom no reasoning will shift, their convictions impervious to the intrusions of reality. In these cases, it is highly unlikely that a simple mathematical demonstration of the untenability of their belief will change their view-point. However, for the less invested such an intervention might indeed prove useful.
Acknowledgments
Many thanks to Dr. Frederic Whitehurst for this helpful first-hand insight into the FBI forensics scandal. As the author is a physicist rather than a psychologist, I am indebted to Profs. Stephan Lewandowsky and Ted Goertzel and Ms. Mathilde Hernu for their valuable input on conspiratorial thinking. Thanks also to Drs. Ben Goertzel and David Basanta for their comments and suggestions, and to the reviewers for their helpful comments and suggestions. I would also like to thank my University of Oxford colleagues for their continued support, in particular Dr. Mike Partridge. This work did not require specific funding, from nebulous clandestine cabals or otherwise.
Author Contributions
Conceived and designed the experiments: DRG. Performed the experiments: DRG. Analyzed the data: DRG. Contributed reagents/materials/analysis tools: DRG. Wrote the paper: DRG.