Why Academic Papers’ Abstracts Should Not Be Trusted, and Why Academic Publishing Should Not Be Regulated
Academic papers’ abstracts are sometimes manipulated to display what looks like a good result. The large profits of online publishing, especially of the open-access (OA) type, are widely (but unfairly) blamed, with endless calls for regulation. In reality, the monopolistic prices and quasi-monopolistic status of the most prestigious online publishers are due not to a lack of regulation but to a combination of copyright law, OA mandates, and moral hazard driven by subsidies.
Predictably, the average reader does not bother to read more than the abstract, because they “lack” time (while somehow finding time to read and post memes). It is also more “productive” to cite more studies by reading only the abstract, because most readers are not familiar with the advanced statistics used, and even those who are might simply be “lazy”. Copium prevails, along with the belief that the scientists know what they are doing and that the peer reviewers are competent enough. The reality is that most research fails to replicate, that many claims made in abstracts are misleading, and that there is no such thing as widespread consensus in research (Bornmann et al., 2010).
My own strategy has always been, for many years, to read the full papers, focusing on the methods and results sections, and on the discussion whenever an alternative interpretation is needed. I use no shortcuts such as reading review papers, because most of the time they provide very little detail and are almost never critical of the studies they cover, and I usually find flaws where other reviews find none. This is, however, time consuming, and only nerds, freaks or zombies select this route.
1. Bad abstracts are intentional and widespread
Several reports have highlighted serious discrepancies between statements in the abstract and the actual findings across various fields and journals, such as psychiatry and psychology (Harris et al., 2002; Jellison et al., 2020), pharmacy (Ward et al., 2004), medicine and biomedicine (Pitkin et al., 1999; Estrada et al., 2000; Boutron et al., 2010; Lazarus et al., 2015; Li et al., 2017), lung cancer (Altwairgi et al., 2012), rheumatology (Mathieu et al., 2012), pediatric orthopedics (Jones et al., 2021; Kamel & El-Sobky, 2023), and low back pain research (Nascimento et al., 2020). Misleading reports appear to be quite severe in medical journals. In biomedical research, industry-funded publications showed a very high proportion of misleading abstracts, while none were found in nonindustry-funded publications (Alasbali et al., 2009). In psychiatry and psychology journals, however, industry funding is not associated with increased odds of abstract spin (Jellison et al., 2020).

Articles reporting negative results are more likely to contain misleading abstract conclusions (Mathieu et al., 2012), probably due to publication bias against negative results (Olson et al., 2002; Fanelli, 2010). A non-trivial portion of reproductive medicine studies report p-values without effect sizes in their abstracts (Feng et al., 2024). Statistically significant outcomes have higher odds of being fully reported (Chan et al., 2004a, 2004b). A more pervasive form of reporting bias is the selective reporting of analyses, which can arise from changes in the specification of the composite outcome between abstracts, methods, and results sections, as well as multiple other discrepancies in methods between protocols and publications (Dwan et al., 2014). Indeed, the focus on positive findings in the abstract is strong whenever multiple outcomes have been studied (Duyx et al., 2019). These problems are compounded by the much higher chance for positive results (i.e., studies with a successfully proven hypothesis) to be accepted by journals and to be cited in the literature (Mlinarić et al., 2017; Scherer et al., 2018). Pressure to publish increases bias: the frequency of positive results in the abstract and/or full text is higher in more competitive and productive academic environments (Fanelli, 2010). The strong focus on positive results in abstracts also biases systematic reviews, owing to how the systematic search is conducted (Scherer et al., 2018; Duyx et al., 2019). This bias for positive results may explain why 70% of medical and biological researchers have failed to reproduce other researchers’ results (Baker, 2016), or why half of top cancer studies fail to replicate (Mullard, 2021).
Boutron & Ravaud (2018) illustrate several malpractices that attempt to present the study more favorably, such as hypothesizing after results are known (HARK) or justifying after results are known (JARK). And more generally:
Rhetoric, defined as language designed to have a persuasive or impressive effect, can be used by authors to interest and convince the readers (5). Any author can exaggerate the importance of the topic, unfairly dismiss previous work on it, or use persuasive words to convince the reader of a specific point of view (40, 41). Based on our and others’ experience (40, 41), a typical article might declare that a certain disease is a “critical public health priority” and that previous work on the topic showed “inconsistent results” or had “methodologic flaws.” In such cases, the discussion will inevitably claim that “this is the first study showing” that the new research provides “strong” evidence or “a clear answer”; the list of adjectives and amplifiers is large. Some of these strategies are actually taught to early career researchers. A retrospective analysis of positive and negative words in abstracts indexed in PubMed from 1974 to 2014 showed an increase of 880% in positive words used over the four decades (from 2% in 1974–1980 to 17.5% in 2014) (42).
2. So many kinds of misleading reports
Boutron & Ravaud (2018) explained that there are many ways to embellish the findings within the article. Misleading displays of figures, which includes scaling, lack of Confidence Intervals, a break in Y-axis, projecting curves, and other advanced alterations of images. Various practices to manipulate the p-values, including an interim analysis to decide whether an experiment or a study should be stopped prematurely, post hoc exclusion of outliers from the analysis, decision to combine or split groups, adjust covariates, perform subgroup analysis, or to choose the threshold for dichotomizing continuous outcomes.
Below is a sample of misleading reports among many that I came across, some of which I suspect to be completely deliberate.
Statistical sleight of hand
A common sleight of hand among economists is the emphasis on the relative effect, overshadowing the absolute effect that provides the proper context for evaluating the real effect size. Here is one example among many. Deming et al. (2016) conducted an experimental study by submitting fictitious resumes to real job openings and wrote in their abstract that “a business bachelor’s degree from a for-profit online institution is 22% less likely to receive a callback than one from a nonselective public institution”. Reading their results section, this figure is in reality a relative effect, derived from absolute callback rates of 9% and 7%. The difference is much less impressive than “advertised” in the abstract. The likely intent is to show that for-profit colleges perform much worse than public colleges and therefore should be regulated.
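To make the arithmetic explicit, here is a minimal sketch using the two callback rates cited above (the exact decimals in the paper may differ slightly):

```python
# Relative vs. absolute effect: a minimal sketch using the callback rates
# cited above from Deming et al. (2016).
public_rate = 0.09      # callback rate, nonselective public institution
for_profit_rate = 0.07  # callback rate, for-profit online institution

absolute_gap = public_rate - for_profit_rate   # 0.02 -> 2 percentage points
relative_gap = absolute_gap / public_rate      # ~0.22 -> "22% less likely"

print(f"Absolute gap: {absolute_gap * 100:.1f} percentage points")
print(f"Relative gap: {relative_gap:.0%}")
```

The same two-percentage-point difference can thus be framed either as a modest absolute gap or as an impressive-sounding “22% less likely”.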
P-hacking
Stefan & Schönbrodt (2023) identified 12 p-hacking strategies used to achieve false positive results: selective reporting of the dependent variable, of the independent variable, optional stopping, outlier exclusion, controlling for covariates, scale redefinition, variable transformation, discretizing variables, exploiting alternative hypothesis tests, favourable imputation, subgroup analyses, incorrect rounding. They found that the p-hacking severity increases with the number of tests conducted as well as the dissimilarity between the datasets subjected to the tests, that the combination of p-hacking strategies increases the rate of false positive results, and that effect sizes are overestimated under the presence of p-hacking.
This relates to Bakker et al.’s (2012, Figures 3-4) finding that “the use of several small underpowered samples often represents a more efficient research strategy (in terms of finding p < .05) than does the use of one larger (more powerful) sample”. In their simulation, about half of the investigated psychological studies showed bias, calculated as the difference between the estimated effect size and the true effect size.
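The logic can be reproduced with a toy simulation (an assumed true effect of d = 0.2 and illustrative sample sizes, not Bakker et al.’s exact parameters): splitting the same subject pool into several small studies and reporting whichever one “works” yields p < .05 more often than running a single, better-powered study.

```python
# Toy simulation of the "several small studies" strategy, illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
d, n_sims = 0.2, 5_000            # assumed true standardized effect size
small_n, k_studies = 20, 5        # five small studies...
large_n = small_n * k_studies     # ...versus one large study, same total N

def significant(n_per_group):
    treat = rng.normal(d, 1.0, n_per_group)   # treatment group, true effect d
    ctrl = rng.normal(0.0, 1.0, n_per_group)  # control group
    return stats.ttest_ind(treat, ctrl).pvalue < 0.05

one_large = sum(significant(large_n) for _ in range(n_sims)) / n_sims
any_small = sum(
    any(significant(small_n) for _ in range(k_studies)) for _ in range(n_sims)
) / n_sims

print(f"P(p < .05), one study, n = {large_n} per group:           {one_large:.2f}")
print(f"P(at least one p < .05), {k_studies} studies, n = {small_n} per group: {any_small:.2f}")
```

And of course, the small study that happens to cross the threshold will also report an inflated effect size.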
Selective reporting
Bagde et al. (2016) analyzed affirmative action (AA) programs based on a quota system in India. They wrote in the abstract that the “program increases college attendance of targeted students, particularly at relatively higher-quality institutions” but said nothing about subsequent outcomes. Yet their results clearly show that college quality had no impact on college graduation. This was not mentioned in their abstract, nor even in their conclusion. It is obvious these authors wanted to promote the idea that AA is beneficial by focusing on attendance at high-quality colleges. There are several similar cases of misreporting that I have covered in a previous article.
Questionable methodology
Borghans et al. (2016) analyzed four datasets with diverse measures of IQ and wrote in their abstract that “both grades and achievement tests are substantially better predictors of important life outcomes than IQ”. As I explained in detail before, they obtained this result because they used poor IQ measures, and the achievement tests they used were actually much better measures of cognitive ability than their selected IQ measures, owing to a questionable definition of what counts as an IQ test.
Misapplied statistical criteria
van Soelen et al. (2011, Table 5) analyzed a longitudinal sample and concluded that the heritability of IQ increases from childhood to adulthood. Their report is overall accurate except for performance IQ, which has a heritability of 0.64 based on the reduced AE model. In the full ACE model, heritability was 0.46 and the shared environment 0.17, but because the sample size was very small (224+46) it is no wonder that the C parameter was non-significant despite being clearly different from zero. Dropping this “non-significant” shared-environment component mechanically inflates the heritability, up to 0.64. This is misleading.
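A rough Falconer-style approximation (illustrative only; van Soelen et al. fitted maximum-likelihood structural models) shows why setting C to zero forces the A component to absorb the shared-environment variance:

```latex
% Rough Falconer-style approximation, for illustration only;
% h^2 = a^2 (heritability), c^2 = shared environment.
\begin{aligned}
\text{Full ACE:}\quad & r_{MZ} \approx a^2 + c^2, \qquad r_{DZ} \approx \tfrac{1}{2}a^2 + c^2,\\
& a^2 \approx 0.46, \quad c^2 \approx 0.17.\\[4pt]
\text{Reduced AE } (c^2 \equiv 0):\quad & \hat{a}^2 \text{ must reproduce } r_{MZ} \approx 0.46 + 0.17 = 0.63,\\
& \text{so } \hat{a}^2 \approx 0.64 \gg 0.46.
\end{aligned}
```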
Warne (2023, Table 4) compared the WISC-III scores of Kenyan Grade 8 students in Nairobi schools with the American norms, and the WAIS-IV scores of Ghanaian students who showed English fluency at high school or university with the American norms. Measurement invariance (MI) was said to be tenable for both the Ghanaian and Kenyan samples. But Warne did not use the appropriate cutoffs suggested by simulation studies. If one applies the recommended cutoff of ΔCFI ≥ .005, MI is rejected for the Kenyan sample.
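For reference, the decision rule implied by that simulation-based cutoff can be written as:

```latex
% Decision rule implied by the simulation-based cutoff discussed above.
\Delta\mathrm{CFI} \;=\; \mathrm{CFI}_{\text{unconstrained}} - \mathrm{CFI}_{\text{constrained}},
\qquad
\Delta\mathrm{CFI} \ge .005 \;\Longrightarrow\; \text{reject measurement invariance.}
```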
3. Unintended consequences of publishing regulations
As always, when bad business practices and high prices occur, the Pavlovian response is to blame the market and call for regulation. For instance, Emil Kirkegaard (2024, Nov. 9) proposed three measures to combat the supposed oligopoly and fix the major issues of online publishing:
The research must be open access.
A ceiling on publication fees.
Materials (data, code, etc.) must be public.
Proposition 1. The common argument is that accessible research spreads knowledge across the world, ultimately helping researchers in low-income countries with poorly funded institutions. Yet enforcing open access will trigger the unintended consequences typically associated with public policies. Here, the high cost of open-access articles will be fully absorbed by authors, library funds or research grants. This means that poorly funded universities (especially in low-income countries) and self-funded researchers (for instance, when the topic is too controversial to attract funding) will face serious barriers to entry. There is some truth to this concern (Borrego, 2023; Frank et al., 2023). Some may even leave academia, as Nabyonga-Orem et al. (2020) observed: “African researchers are often left with no option but to pay out of pocket to cover APCs … Low salaries in African universities is a major reason why researchers leave academia for consultancy or migrate to high-income countries.” The problem is further exacerbated by the fact that government-funded research is much less sensitive to large fees, favoring institutions with large endowments even more. To make things worse, such high costs necessarily discourage some types of submissions, such as single case reports, exploratory research, and commentaries.
There is a debate over which publishing model is preferable, subscription or open access (OA), because OA publishing involves article processing charges (APCs) and is therefore a potential threat to the integrity of peer review. Under the OA system, journal revenues depend on the number of published articles, which may ultimately result in predatory practices. When this happens, journals may be labeled as predatory, as was the case for MDPI or Frontiers, which may discourage authors from publishing there. Since poorly funded researchers cannot bear the cost of APCs, there should be demand for both OA and subscription-based journals (see Lakhotia, 2015). As explained in Section 4, OA mandates promote predatory publishing. This is why regulations and moral hazard (created by subsidies) should be removed, so that the market can adopt the combination of features that produces the best outcome, including whether peer review should be eliminated or not (Heesen & Bright, 2021; Elton, 2024, October 22). Competition is, after all, best described as a dynamic process in which entrepreneurs keep improving their products through novel methods and strategies. Armentano (1982), Folsom (1991), Yu (1998), and DiLorenzo (2005) provide compelling evidence that entrepreneurs typically thrive by adapting and innovating to accommodate the market’s needs in the absence of regulations or policies that create moral hazard.
A related topic is whether OA journals should also disclose the reviews and comments. Open peer review does not compromise the peer-review process (Chawla, 2019). Evidence indicates that open peer review may sometimes increase the time taken to review and increase the likelihood of reviewers declining to review (van Rooyen et al., 1999; 2010; Walsh et al., 2000).
Proposition 2 is particularly dangerous. Anyone with basic knowledge of economics immediately understands why: Mankiw (2024, pp. 112-116) illustrates why the likely outcome of such price controls, especially a price ceiling, is a supply shortage. The most egregious case is rent control:
In many cities, the local government places a ceiling on rents that landlords may charge their tenants. This is rent control, a policy aimed at helping the poor by keeping housing costs low. Yet economists often criticize rent control, saying that it is a highly inefficient way to help the poor. One economist went so far as to call rent control “the best way to destroy a city, other than bombing.”
The adverse effects of rent control may not be apparent because these effects occur over many years. In the short run, landlords have a fixed number of apartments to rent, and they cannot adjust this number quickly as market conditions change. Moreover, the number of people looking for apartments may not be highly responsive to rents in the short run because people take time to adjust their housing arrangements. In other words, the short-run supply and demand for housing are both relatively inelastic.
Panel (a) of Figure 3 shows the short-run effects of rent control on the housing market. As with any binding price ceiling, rent control causes a shortage. But because supply and demand are inelastic in the short run, the initial shortage is small. The primary result in the short run is popular among tenants: a reduction in rents.
The long-run story is very different because the buyers and sellers of rental housing respond more to market conditions as time passes. On the supply side, landlords respond to low rents by not building new apartments and by failing to maintain existing ones. On the demand side, low rents encourage people to find their own apartments (rather than live with roommates or their parents) and to move into the city. Therefore, both supply and demand are more elastic in the long run.
Panel (b) of Figure 3 illustrates the housing market in the long run. When rent control depresses rents below the equilibrium level, the quantity of apartments supplied falls substantially, and the quantity of apartments demanded rises substantially. The result is a large shortage of housing.
In cities with rent control, landlords and building superintendents use various mechanisms to ration housing. Some keep long waiting lists. Others give preference to tenants without children. Still others discriminate based on race. Sometimes, apartments are allocated to those willing to offer under-the-table payments; these bribes bring the total price of an apartment closer to the equilibrium price.
Economic theory therefore predicts a lower supply along with weaker quality control of the papers submitted to the affected journals. There is some evidence that pricing is correlated with key causal variables. Siler & Frenken (2020) explained that pricing differences across disciplines arise for several reasons: 1) the convention in medicine and the natural sciences of hiring professional editors to oversee journals, unlike in the social sciences and humanities; 2) academic research involving collaboration through large-scale organizations being more prominent in medicine and the natural sciences than in the social sciences and humanities. Rose-Wiles (2011) argued that differential pricing is driven by differential demand, because science researchers are more likely to publish and cite articles than social science and humanities researchers, and because the libraries of science and medicine companies are much better funded. Van Noorden (2013) observed that publishers attempt to justify their high running costs by citing, e.g., evaluation, plagiarism checks, peer review, editing, typesetting, graphics, formatting, and hosting, yet he also noted that, in the open-access world, the higher-charging journals do not command the greatest citation-based influence (i.e., impact factor). But this may depend on how impact is measured: Björk & Solomon (2015) found a correlation of 0.67 between APC and Source Normalized Impact per Paper when weighted by article volumes. There is no denying such a relationship, and no denying that citations are highly desirable, yet citation-based statistics can rightfully be criticized on other grounds: Serra-Garcia & Gneezy (2021) found that studies that do not replicate are cited more than those that do, Severin et al. (2023, Figures 4-5) found that the traditional impact factor does not reliably command better peer reviews, and Rose-Wiles (2011) noted that citations can easily be manipulated.
This obviously does not address the underlying question: why do academic journals generate so much profit? An examination of the rise in college costs over time helps to understand it. As I concluded earlier, the rising cost is best explained by the Bennett hypothesis, which postulates that government student aid makes it possible for colleges to raise their prices: institutions can charge more because students are able to bear the cost. Subsidies shift the demand curve upward. Journal pricing is affected by the same forces, just as in other areas. It is therefore not surprising that Hagve (2020) observed the following: “As in many other countries, most of the research funding in Norway comes from the government. Thereby, the government funds all stages of research production, but must then pay again to access the research results.” Fyfe et al. (2017, p. 9) observed that academic publishing has become highly profitable due to “the growth of academic research and the relatively generous funding available for the expanding university library sector: there was more research to be published, but also more institutions able to purchase it”. This gives rise to an inelastic demand and, by the same token, a reduction in competition. It explains why the prestige effect prevails when the cost is absorbed by taxpayers. Unsurprisingly, Morrison et al. (2021) found that authors choose to publish in more expensive journals, while Khoo (2019, Table 1) found that higher APCs are not associated with a decrease in article volumes over time, showing that authors are not sensitive to prices. Thus, subsidies indirectly reduce competition. Market-failure theorists such as Edwards & Shulenburger (2003), who complained about unfair competition and inelastic demand, obviously failed to understand this law of unintended consequences.
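A toy linear market (illustrative numbers only, not estimates of any actual journal market) makes the mechanism concrete: a per-unit subsidy s paid to buyers shifts the demand curve upward and raises the equilibrium price.

```latex
% Toy linear market, purely illustrative (arbitrary units).
\begin{aligned}
\text{Demand: } P = 100 - Q + s, \qquad & \text{Supply: } P = 20 + Q.\\
s = 0:\quad & Q^{*} = 40,\; P^{*} = 60.\\
s = 30:\quad & 100 - Q + 30 = 20 + Q \;\Rightarrow\; Q^{*} = 55,\; P^{*} = 75.
\end{aligned}
```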
Perhaps the most important factor is the existence of copyright law which sustains the monopolistic power. Bergstrom & Bergstrom (2004) illustrate the irony as follows: “This market power is sustained by copyright law, which restricts competitors from selling “perfect substitutes” for existing journals by publishing exactly the same articles. In contrast, sellers of shoes or houses are not restrained from producing nearly identical copies of their competitors’ products.” A counter-argument is that innovation is not possible without copyright. But that assertion is false.
Proposition 3. Code sharing makes sense, and there are many reasons to share code, especially the portion relevant to the analysis (LeVeque, 2013). Data sharing is harder to justify, for various reasons. Multiple studies found that data sharing does not reduce errors (Nuijten et al., 2017; Claesen et al., 2023), although one study did find such a relationship (Wicherts et al., 2011). Papers with access to raw data are cited more often (Piwowar et al., 2007; Colavizza et al., 2020), yet researchers still have little incentive to share data. Researchers have stated their reasons for not sharing: the priority of additional publishing, fear of being challenged after data re-analysis, financial interests, and being bound by legal agreements not to reveal sensitive data (Tedersoo et al., 2021). The first point makes sense: funding agencies expect researchers to publish many papers as an indicator of productivity. The second point is also understandable. It is likely that many scientists know the dirty secret that most research fails to replicate, and data sharing puts them at the mercy of future criticism while offering no reward in return. This does not mean scientists should not share data, but the current incentive structure does not reward it.
4. The inherent “failure” of free markets
Predatory practice is best crystallized by open-access (OA) publishing, which focuses on quantity over quality because its profits depend on the number of published papers. But upon examination, government and institutional funding often come with mandates to publish frequently and in prestigious journals. Consistent with this idea, Shen & Björk (2015) argued that “The universities or funding agencies in a number of countries that strongly emphasize publishing in ‘international’ journals for evaluating researchers, but without monitoring the quality of the journals in question [16, 33], are partly responsible for the rise of this type of publishing.” Moreover, predatory publishing is not viable in the long term, owing to declining reputation. As Shen & Björk (2015) observed: “For instance, the DOAJ has, since 2014, imposed stricter criteria for inclusion and has filtered out journals that do not meet them [35]. Membership in the Open Access Scholarly Publishers Association (OASPA) is also contingent on meeting quality criteria.”
Competition has been skewed by government meddling. Dudley (2021) observed that OA mandates such as “Plan S” have met with resistance from both publishers and scholars. Larivière & Sugimoto (2018) reported that researchers cite norms and needs within their disciplines as a reason not to comply with OA mandates, whereas Frank et al. (2023) reported that some researchers avoid the OA system because of its unfairness toward low-income countries. Indeed, in a free market economy there would be competition between OA and non-OA publishing, depending on the varying needs and funds of researchers. One may wonder whether the current situation of science publishing is truly an outcome inherent to free markets when the F1000 website makes the following statement:
A compliant OA publication meets the requirements set out in an OA policy introduced by a funder, institution, or government.
All F1000 publishing venues are fully open access and comply and support international open access mandates. As such, open access, immediate publication, and open data are all hallmarks of our own open access policies. Authors that don’t adhere to the requirements set out may fail our pre-publication checks, and their article may be rejected.
Although Dudley (2021) noted that OA articles are read and cited more often than non-OA articles, leading more authors and publishers/journals to opt for OA, this shift was made possible only because of OA mandates. Indeed, Dudley (2021) further noted that “Importantly, mandates for OA reform have led to an increase in the availability of funding for APCs which has further reinforced the prevalence of the pay to publish OA model.” As a result, poorly funded researchers have no choice but to select predatory OA journals that charge low fees but pay little attention to quality. By discouraging the subscription-based model, OA mandates gave rise to predatory OA publishing.
This also explains why OA publishing is often criticized for generating profits that are disproportionate to its costs. As noted before, government-funded research reduces price sensitivity among researchers. The issue is amplified by OA requirements, which generate more demand for OA journals and therefore push prices upward. Dudley (2021) reminds us that “since 2008, agencies of the U.S. government require research findings to be available on OA platforms”. Similarly, Butler et al. (2023) argued that the growth of OA publishing coincided with the increase in funder OA mandates and policies (e.g., “Plan S”). And Larivière & Sugimoto (2018) observed that funders such as the NIH and the Wellcome Trust have both stated that they will withhold or suspend payments if articles are not made open access.
The attack on OA publishing often disregards the publish-or-perish culture that is “encouraged” (i.e., forced) by public agencies. This culture allows predatory publishing to become much more prevalent because too many papers are submitted, more than reviewers can handle; lacking volunteer reviewers, journals eventually skip the reviewing process. Another take is the supposed lack of regulation. For instance, Frank et al. (2023) observed that the phenomenon of predatory publishers is largely concentrated “in a few middle-income countries, often with lax regulatory environments for publishing of any kind”, whereas “[A]cademics in the leading western countries are usually not lured by the Siren songs of the predatory journals, and most of the authors are from Africa and Asia, from countries where advancement requires ‘international’ publication, with no quality checks”. Their observation mirrors the findings of Bohannon (2013), who reported that 80% of the OA journals that accepted his fake paper operated in India, and of Shen & Björk (2015, Figure 8), who reported that 35% of publications in predatory journals are by Indian authors. Their conclusion is that the lack of regulation is the problem, not OA. Frank et al. either did not do their research well or failed to understand the adverse effects of current regulations. According to India’s University Grants Commission (UGC), under its 2018 regulations, the minimum qualification for a professor includes “(iii) A minimum of 10 research publications in peer-reviewed or UGC-listed journals. (iv) A minimum of 110 Research Score as per Appendix II, Table 2”, where the research score depends on the impact factor, while the requirements for various promotions include “A minimum of seven publications in the peer-reviewed or UGC-listed journals out of which three research papers should have been published during the assessment period.” Relatedly, Raju (2013) and Lakhotia (2015) noted that the introduction of the Academic Performance Indicators (API) in 2010 caused an upsurge in the number of papers published in India, likely because the UGC treats the number of publications as the major criterion for appointments and tenure promotions. Moreover, Seethapathy et al. (2016) observed that, in India, “90% and 73% of authors considered research publication as an achievement and has academic pressure respectively to publish research articles because publication gives job securities and promotions”. Once again, publications and high research scores are made mandatory, fueling the publish-or-perish culture.
So-called anti-competitive tactics are also pointed out. Major publishers often propose a “big deal” package in which journals are bundled across titles and across print and electronic versions. This typically involves a library entering into a long-term arrangement for access to a large electronic collection of journals at a substantial discount. According to Edlin & Rubinfeld (2005), bundling can be seen as a strategic (i.e., anti-competitive) barrier to entry: whenever an incumbent publisher misprices and loses a sale by pricing too high for a school to buy, an opportunity is created for new competitors, but bundling takes advantage of the law of large numbers to limit such pricing “inaccuracies” and, with them, the opportunities for entrants. Their discussion is worth quoting at length:
The situation is similar for libraries that come up for renewal of the Big Deal. A library subscribing to the complete Elsevier package can cancel all 1,800 Elsevier titles and buy a la carte. If the alternative publisher is only offering a single new journal, it is unlikely the library would cancel the Elsevier titles to get the new journal. Entry is certainly more difficult, but should the publishers’ behavior be deemed to be exclusionary, or should the publisher be seen as competing on the merits?
We believe that bundling is a good candidate to be judged exclusionary and anticompetitive. Excluding entrants by charging low prices is generally considered competition on the merits: it is favored from a public-policy vantage because customers gain from the low prices. On the other hand, because the total bundle price is so much higher than the sum of the marginal prices, Big Deal bundling excludes entrants without providing this kind of benefit to buyers. A complete answer to the question of whether Big Deal bundling is exclusionary rests in part on whether entry of new journals would be substantially easier if publishers did not bundle and only sold journals on an individual basis. The answer presumably depends in part on the decisions of librarians as to whether and to what extent they would allocate more funds toward new and alternative publications if they could achieve proportionate savings from cancelled subscriptions.
[...] These questions do not have easy answers, in part because the issues raise inherent conflicts between the static efficiency gains associated with bundling and the dynamic efficiency losses associated with a lack of additional entry.
Their reasoning is correct. As Rothbard (1962, ch. 10) explained, a monopoly status is harmless without monopoly pricing. Folsom (1991) provided an excellent illustration: Rockefeller once dominated the oil industry through increased efficiency and lower prices, which greatly benefited consumers. In the case of journal pricing, the price does seem too high. What they omit to say, though, is that government subsidies distort the demand curve by reducing the price sensitivity of buyers. As a result, packages that include lower-quality journals alongside top-tier ones appear more appealing than they otherwise would. And copyright law puts the final nail in the coffin.
5. A thought on private funding
The attack on the free market is getting old. The main theory is always a combination of the same ingredients: asymmetric information, self-fulfilling prophecies, Gresham’s law, negative externalities, conflicts of interest, free riders, huge fixed costs of entry, and many other arguments based on the same ideas. Another widespread idea is that some sectors of the economy are “exceptions” that do not comply with the rules of the market. This argument has been used to describe the historical market “failures” of water supply, education, the banking system, credit rating agencies, and so on. And it has always failed.
We see similar arguments with respect to scholarly journals. Edwards & Shulenburger (2003) and Fyfe et al. (2017, p. 14) argued that academic publishing does not function as a free market because journals and books cannot be substituted by cheaper alternatives, which means that libraries and readers cannot choose between equivalent goods. Bergstrom et al. (2014) attribute the suboptimal big deal packages paid for by libraries to asymmetric information. As explained above, these arguments fall flat once subsidies, OA mandates, and copyright are considered.
Because a free economy has hardly ever existed, it may be difficult to imagine how private agents could provide and manage common goods without government support. A historical illustration helps. Here is what Bastiat (1850) concluded after observing the success of private mutual-aid societies in France:
The natural danger that threatens such associations consists in the removal of the sense of responsibility. No individual can ever be relieved of responsibility for his own actions without incurring grave perils and difficulties for the future. [2] If the day should ever come when all our citizens say, “We shall assess ourselves in order to aid those who cannot work or cannot find work,” there would be reason to fear that man’s natural inclination toward idleness would assert itself, and that in short order the industrious would be made the dupes of the lazy. Mutual aid therefore implies mutual supervision, without which the benefit funds would soon be exhausted. This mutual supervision, which is for the association a guarantee of continued existence, and for each individual an assurance that he will not be victimized, is also the source of the moral influence it, as an institution, exercises. Thanks to it, drunkenness and debauchery are gradually disappearing, for by what right could a man claim help from the common fund when it could be proved that he had brought sickness and unemployment on himself through his own fault, by his own bad habits? This supervision restores the sense of responsibility that association, left to itself, would tend to relax.
[...] But, I ask, what will happen to the morality of the institution when its treasury is fed by taxes; when no one, except possibly some bureaucrat, finds it to his interest to defend the common fund; when every member, instead of making it his duty to prevent abuses, delights in encouraging them; when all mutual supervision has stopped, and malingering becomes merely a good trick played on the government? The government, to give it its just due, will be disposed to defend itself; but, no longer being able to count on private action, will have to resort to official action. It will appoint various agents, examiners, controllers, and inspectors. It will set up countless formalities as barriers between the workers’ claims and his relief payments. In a word, an admirable institution will, from its very inception, be turned into a branch of the police force.
References
Alasbali, T., Smith, M., Geffen, N., Trope, G. E., Flanagan, J. G., Jin, Y., & Buys, Y. M. (2009). Discrepancy between results and abstract conclusions in industry-vs nonindustry-funded studies comparing topical prostaglandins. American Journal of Ophthalmology, 147(1), 33–38.
Altwairgi, A. K., Booth, C. M., Hopman, W. M., & Baetz, T. D. (2012). Discordance between conclusions stated in the abstract and conclusions in the article: analysis of published randomized controlled trials of systemic therapy in lung cancer. Journal of clinical oncology, 30(28), 3552–3557.
Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature 533, 452–454.
Bakker, M., Van Dijk, A., & Wicherts, J. M. (2012). The rules of the game called psychological science. Perspectives on Psychological Science, 7(6), 543–554.
Bergstrom, C. T., & Bergstrom, T. C. (2004). The costs and benefits of library site licenses to academic journals. Proceedings of the National Academy of Sciences, 101(3), 897–902.
Bergstrom, T. C., Courant, P. N., McAfee, R. P., & Williams, M. A. (2014). Evaluating big deal journal bundles. Proceedings of the National Academy of Sciences, 111(26), 9425–9430.
Björk, B. C., & Solomon, D. (2015). Article processing charges in OA journals: relationship between price and quality. Scientometrics, 103, 373–385.
Bohannon, J. (2013). Who’s afraid of peer review? Science, 342, 60–65.
Bornmann, L., Mutz, R., & Daniel, H. D. (2010). A reliability-generalization study of journal peer reviews: A multilevel meta-analysis of inter-rater reliability and its determinants. PloS One, 5(12), e14331.
Borrego, Á. (2023). Article processing charges for open access journal publishing: A review. Learned Publishing, 36(3), 359–378.
Boutron, I., Dutton, S., Ravaud, P., & Altman, D. G. (2010). Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA, 303(20), 2058–2064.
Boutron, I., & Ravaud, P. (2018). Misrepresentation and distortion of research in biomedical literature. Proceedings of the National Academy of Sciences, 115(11), 2613–2619.
Butler, L.-A., Matthias, L., Simard, M.-A., Mongeon, P., & Haustein, S. (2023). The oligopoly’s shift to open access: How the big five academic publishers profit from article processing charges. Quantitative Science Studies, 4(4), 778–799.
Chan, A. W., Hróbjartsson, A., Haahr, M. T., Gøtzsche, P. C., & Altman, D. G. (2004a). Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA, 291(20), 2457–2465.
Chan, A. W., Krleža-Jerić, K., Schmid, I., & Altman, D. G. (2004b). Outcome reporting bias in randomized trials funded by the Canadian Institutes of Health Research. CMAJ, 171(7), 735–740.
Claesen, A., Vanpaemel, W., Maerten, A. S., Verliefde, T., Tuerlinckx, F., & Heyman, T. (2023). Data sharing upon request and statistical consistency errors in psychology: A replication of Wicherts, Bakker and Molenaar (2011). Plos One, 18(4), e0284243.
Colavizza, G., Hrynaszkiewicz, I., Staden, I., Whitaker, K., & McGillivray, B. (2020). The citation advantage of linking publications to research data. PloS One, 15(4), e0230416.
Dudley, R. G. (2021). The changing landscape of open access publishing: Can open access publishing make the scholarly world more equitable and productive?. Journal of Librarianship and Scholarly Communication, 9(1), eP2345.
Duyx, B., Swaen, G. M., Urlings, M. J., Bouter, L. M., & Zeegers, M. P. (2019). The strong focus on positive results in abstracts may cause bias in systematic reviews: a case study on abstract reporting bias. Systematic Reviews, 8, 1–8.
Dwan, K., Altman, D. G., Clarke, M., Gamble, C., Higgins, J. P., Sterne, J. A., Williamson, P. R., & Kirkham, J. J. (2014). Evidence for the selective reporting of analyses and discrepancies in clinical trials: a systematic review of cohort studies of clinical trials. PLoS Medicine, 11(6), e1001666.
Edlin, A. S., & Rubinfeld, D. L. (2005). The bundling of academic journals. American Economic Review, 95(2), 441–446.
Edwards, R., & Shulenburger, D. (2003). The high cost of scholarly journals:(and what to do about it). Change: The Magazine of Higher Learning, 35(6), 10–19.
Estrada, C. A., Bloch, R. M., Antonacci, D., Basnight, L. L., Patel, S. R., Patel, S. C., & Wiese, W. (2000). Reporting and concordance of methodologic criteria between abstracts and articles in diagnostic test studies. Journal of General Internal Medicine, 15(3), 183–187.
Fanelli, D. (2010). Do pressures to publish increase scientists’ bias? An empirical support from US States Data. PloS One, 5(4), e10271.
Feng, Q., Mol, B. W., Ioannidis, J. P., & Li, W. (2024). Statistical significance and publication reporting bias in abstracts of reproductive medicine studies. Human Reproduction, 39(3), 548–558.
Frank, J., Foster, R., & Pagliari, C. (2023). Open access publishing–noble intention, flawed reality. Social Science & Medicine, 317, 115592.
Fyfe, A., Coate, K., Curry, S., Lawson, S., Moxham, N., & Røstvik, C. M. (2017). Untangling academic publishing: A history of the relationship between commercial interests, academic prestige and the circulation of research.
Heesen, R., & Bright, L. K. (2021). Is peer review a good idea?. The British Journal for the Philosophy of Science.
Jellison, S., Roberts, W., Bowers, A., Combs, T., Beaman, J., Wayant, C., & Vassar, M. (2020). Evaluation of spin in abstracts of papers in psychiatry and psychology journals. BMJ Evidence-Based Medicine, 25(5), 178–181.
Jones, C., Rulon, Z., Arthur, W., Ottwell, R., Checketts, J., Detweiler, B., Calder, M., Adil, A., Hartwell, M., Wright, D. N., & Vassar, M. (2021). Evaluation of spin in the abstracts of systematic reviews and meta-analyses related to the treatment of proximal humeral fractures. Journal of Shoulder and Elbow Surgery, 30(9), 2197–2205.
Kamel, S. A., & El-Sobky, T. A. (2023). Reporting quality of abstracts and inconsistencies with full text articles in pediatric orthopedic publications. Research Integrity and Peer Review, 8(1), 11.
Khoo, S. Y. S. (2019). Article processing charge hyperinflation and price insensitivity: An open access sequel to the serials crisis. Liber Quarterly, 29(1), 1–18.
Lakhotia, S. C. (2015). Predatory journals and academic pollution. Current Science, 108(8), 1407–1408.
Larivière V., & Sugimoto, C. R. (2018). Do authors comply when funders enforce open access to research? Nature, 562, 483–486
Lazarus, C., Haneef, R., Ravaud, P., & Boutron, I. (2015). Classification and prevalence of spin in abstracts of non-randomized studies evaluating an intervention. BMC Medical Research Methodology, 15(85), 1–8.
LeVeque, R. J. (2013). Top ten reasons to not share your code (and why you should anyway). Siam News, 46(3), 15.
Li, G., Abbade, L. P., Nwosu, I., Jin, Y., Leenus, A., Maaz, M., ... & Thabane, L. (2017). A scoping review of comparisons between abstracts and full reports in primary biomedical research. BMC Medical Research Methodology, 17(181), 1–12.
Mankiw, N. G. (2024). Principles of Economics. Cengage Learning.
Mathieu, S., Giraudeau, B., Soubrier, M., & Ravaud, P. (2012). Misleading abstract conclusions in randomized controlled trials in rheumatology: comparison of the abstract conclusions and the results section. Joint Bone Spine, 79(3), 262–267.
Mlinarić, A., Horvat, M., & Šupak Smolčić, V. (2017). Dealing with the positive publication bias: Why you should really publish your negative results. Biochemia Medica, 27(3), 447–452.
Mullard, A. (2021). Half of top cancer studies fail high-profile reproducibility effort. Nature, 600, 368–369.
Nabyonga-Orem, J., Asamani, J. A., Nyirenda, T., & Abimbola, S. (2020). Article processing charges are stalling the progress of African researchers: a call for urgent reforms. BMJ Global Health, 5(9), e003650.
Nascimento, D. P., Gonzalez, G. Z., Araujo, A. C., Moseley, A. M., Maher, C. G., & Costa, L. O. P. (2020). Eight in every 10 abstracts of low back pain systematic reviews presented spin and inconsistencies with the full text: an analysis of 66 systematic reviews. Journal of Orthopaedic & Sports Physical Therapy, 50(1), 17–23.
Nuijten, M. B., Borghuis, J., Veldkamp, C. L., Dominguez-Alvarez, L., Van Assen, M. A., & Wicherts, J. M. (2017). Journal data sharing policies and statistical reporting inconsistencies in psychology. Collabra: Psychology, 3(1), 31.
Olson, C. M., Rennie, D., Cook, D., Dickersin, K., Flanagin, A., Hogan, J. W., Zhu, Q., Reiling, J., & Pace, B. (2002). Publication bias in editorial decision making. JAMA, 287(21), 2825–2828.
Pitkin, R. M., Branagan, M. A., & Burmeister, L. F. (1999). Accuracy of data in abstracts of published research articles. JAMA, 281(12), 1110–1111.
Piwowar, H. A., Day, R. S., & Fridsma, D. B. (2007). Sharing detailed research data is associated with increased citation rate. PloS One, 2(3), e308.
Raju, N. V. (2013). How does UGC identify predatory journals?. Current Science, 104(11), 1461–1462.
Rose-Wiles, L. M. (2011). The high cost of science journals: A case study and discussion. Journal of Electronic Resources Librarianship, 23(3), 219–241.
Scherer, R. W., Meerpohl, J. J., Pfeifer, N., Schmucker, C., Schwarzer, G., & von Elm, E. (2018). Full publication of results initially presented in abstracts. Cochrane Database of Systematic Reviews, 2018(11), MR000005.
Seethapathy, G. S., Kumar, J. U. S., & Hareesha, A. S. (2016). India’s scientific publication in predatory journals: need for regulating quality of Indian science and education. Current Science, 111(11), 1759–1764.
Serra-Garcia, M., & Gneezy, U. (2021). Nonreplicable publications are cited more than replicable ones. Science Advances, 7(21), eabd1705.
Severin, A., Strinzel, M., Egger, M., Barros, T., Sokolov, A., Mouatt, J. V., & Müller, S. (2023). Relationship between journal impact factor and the thoroughness and helpfulness of peer reviews. PLoS Biology, 21(8), e3002238.
Shen, C., & Björk, B. C. (2015). ‘Predatory’ open access: a longitudinal study of article volumes and market characteristics. BMC Medicine, 13, 230.
Siler, K., & Frenken, K. (2020). The pricing of open access journals: Diverse niches and sources of value in academic publishing. Quantitative Science Studies, 1(1), 28–59.
Stefan, A. M., & Schönbrodt, F. D. (2023). Big little lies: A compendium and simulation of p-hacking strategies. Royal Society Open Science, 10(2), 220346.
Tedersoo, L., Küngas, R., Oras, E., Köster, K., Eenmaa, H., Leijen, Ä., Pedaste, M., Raju, M., Astapova, A., Lukner, H., Kogermann, K., & Sepp, T. (2021). Data sharing practices and data availability upon request differ across scientific disciplines. Scientific Data, 8, 192.
Van Noorden, R. (2013). The true cost of science publishing. Nature, 495, 426–429.
Van Rooyen, S., Godlee, F., Evans, S., Black, N., & Smith, R. (1999). Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial. BMJ, 318(7175), 23–27.
Van Rooyen, S., Delamothe, T., & Evans, S. J. (2010). Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial. BMJ, 341, c5729.
Walsh, E., Rooney, M., Appleby, L., & Wilkinson, G. (2000). Open peer review: a randomised controlled trial. The British Journal of Psychiatry, 176(1), 47–51.
Wicherts, J. M., Bakker, M., & Molenaar, D. (2011). Willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results. PloS One, 6(11), e26828.