Last month I looked at the uses and limits of academic freedom in Australia, taking the Peter Ridd affair at James Cook University as a case study. And also at how last year’s French Review balanced two aspects of the work of universities: promoting free exchange as a way of exploring ideas, testing claims, exposing error and verifying knowledge; and promoting respect for others’ rights not to be harmed by (say) defamation or vilification.
Of course, the usual legal limits on free speech apply to academic freedom. But beyond this, the French Review saw risks with any formal rule against “lack of respect”. In its Model Code, the fact that someone may feel insulted or offended by a (lawfully expressed) view doesn’t justify restrictions. Otherwise, competing views on controversial topics won’t be aired openly enough to be properly examined.
This doesn’t make academic freedom a licence for ad hominem attacks. By long tradition, universities are institutions designed to seek, spread and settle truth and knowledge reliably. And as seen in the first US presidential debate last month, personal attacks and name-calling add heat but shed no light on matters of substance.
In politics, such debates are closer to sporting events than to scholarly inquiry (just watch this clip from the West Wing “Game On” episode, back in 2002). Point-scoring inevitably assumes priority in election campaigns. But even here, points will be won with voters for addressing policy substance, not just for rhetorical skill at wrong-footing an opponent.
In scholarly debates a free exchange of views isn’t a free-for-all, or a win-at-all-costs struggle for dominance. Here the main aim is to expose and examine the substantive issues – clarifying the basis of each side’s main case, and where and why their standpoints differ. Framing the exchange as a form of combat, which only one party can win, invites defensive reasoning and may make the larger project self-defeating.
So, what debating norms can help university students engage in genuinely open and critical thinking in a class or a campus forum? In my model at Chart 1, the parties will disagree well if they aim high: cross-examining each contested claim with better logic and evidence, at Level 2. If they can do this well enough for long enough, either view – mainstream or minority – may be finally refuted at Level 1. (Or strongly reaffirmed, or sensibly reframed.)
But, as my list of Level 3 tactics suggests, there are many ways to disagree badly, not just with what I’ll call “BadHom” or “what a bad person!” tactics.
At Level 3, the first two “avoidance” tactics seek to enlist support for a standpoint by appealing to a higher authority or greater good. (But, how far do we rely on one, or prioritise the other? With what caveats, in what context?) The next four seek to evade the point of an opponent’s view, mainly by raising other concerns. (But, how relevant and significant are these?) The final three seek to exclude a view from full and fair consideration, by casting doubt on the character or credibility of its proponent. (But, are they just being offensive or arguing in bad faith? Is their view simply untenable? Is their case not really an argument at all?)
Common avoidance tactics include “straw man” arguments (Misdirect 4). An opponent’s view is reframed in a way that gives it unintended meanings, often by way of loaded language. These depictions are then disputed, instead of the opponent’s actual claims. This makes it easy to shoot the messenger (BadHom 1-3) without making any effort to make sense of their substantive message.
Another tactic is to introduce an “expert fact” as a trump card to prove a point (Misdirect 2). But this may be a “red herring” that leads away from the original point (Misdirect 1). Or, if directly relevant, it may not amount to firm evidence of the claim. It’s easy to conflate data, information, knowledge and expertise (or wisdom). The phrase “lies, damned lies and statistics”, popularised by Mark Twain in 1907, refers to the use of “expert” knowledge to prop up or put down a case, without proving anything conclusively.
The risks of “spin” multiply when the matter is complex, facts are few or open to interpretation, and when one side’s “expert” has scope to pre-select which facts “count” as reliable evidence. A 19th century British judge classed unreliable witnesses as “simple liars, damned liars, and experts”. His concern wasn’t that expert witnesses said things they knew to be untrue. It was their selective use of “emphasis” and their “highly cultivated faculty of evasion”.
Similar issues arise in public policy debates, where the basis for decisions is often said to be “policy-based evidence”. The “spin-doctor” art of cherry-picking convenient facts or quotes to shape or shift a narrative is familiar in professional politics. In an age awash with media, political parties and industry lobbies rely heavily on scene-setting and story-telling to win popular support. In Danish politics, “spindoktor” is a professional job description.
While spin-doctoring is a well-known feature of modern democracies, the use of rhetoric to persuade an audience is as old as Aristotle. The issue is whether it’s used to promote more informed deliberation, or instead to confuse or close down a discussion.
In modern university contexts, another Level 3 tactic is to take offence at the tone or terms of an opposing view, without addressing its substance (BadHom 1). This may seem more civil than simply calling someone an FBDZS (BadHom 3). But the “chilling effect” may be similar. By shifting off-topic to invoke rules of civility, it offers scope to censor or “cancel” the exchange without conceding any substantive point. From there, it’s a short step to leaving the matter unexamined, with neither party willing to spend time decoding the wrong-headed assumptions or misguided ideologies of “bigots and snowflakes”.
On controversial topics, many debates mix Level 2 and Level 3 ways of arguing. Some are “won” with Level 3 tactics alone. But focusing on tactical point-scoring and side-stepping, no matter how skilful, doesn’t lead to scholarly refutation. Instead, it often leaves core points of contention unexamined. Once a majority view seems settled on this basis, there’s not much space left for anyone’s “radical openness”. Minority views (or any disconfirming data they present) may be discounted or suppressed, to the point where they’re undiscussable.
In a university context, this is where the principle of academic freedom – and freedom of expression more generally – does its work. As one scholar observes: “popular or mainstream ideas generally need no protection”. As places of higher learning, universities assume responsibility for protecting free exchange and making room for viewpoint diversity, while also promoting the practice of scholarly refutation. This stance affords “heretic protection” to minority standpoints, while also exposing them to rigorous examination and counter-argument.
To illustrate my model, Chart 2 presents a sample of Level 3 defensive reasoning in response to my 2016 critique of OECD spending comparisons. (Controversially, the paper argued that Australian levels of public spending on higher education weren’t as low as claimed, owing to widespread reliance on GDP-based metrics to suggest that we were “33 out of 34 for public funding” and the like. It suggested that governments should not take such claims seriously, and that OECD metrics were a “cherry-picker’s picnic” for the sector’s funding advocates. Two colleagues took umbrage, as illustrated in Chart 2. After spending time explaining and apologising, I realised that neither had addressed the paper’s main case directly.)
The model offers a rubric to help scholars and students recognise different ways of arguing, and the limitations of defensive reasoning. It can be applied to class discussions of controversial topics, such as how to address climate change, debates on immigration policy, or the risk and costs of government responses to the coronavirus (overlooked or overcooked?).
For example, the lecturer could ask a panel of student judges to observe the discussion of a contentious “hot topic”. (Many debates on climate change, for example, illustrate what has been called the “I’m Right and You’re an Idiot” approach. Here the aim is simply to discredit the other team.)
By naming the styles of arguing for and against each viewpoint, students could learn to identify how well argued each case was. The class could assess how often each side made strong points with logic and evidence; how often various Level 3 tactics were used; and how this affected debate quality in terms of clarifying issues, testing how valid claims were, and establishing which case seemed stronger.
The model offers scope to engage students as partners in action research, concerned with the practice of free inquiry as an intellectual discipline. It may also offer a basis for moderating debates on controversial topics that cause conflict or distress on campus. With polarised or high-conviction topics, students may turn to “BadHom” tactics more readily. This seems more likely when flaws in their substantive case are at risk of exposure; or when others persist with Level 3 “gaslighting” by disregarding substantive points that erode their own case.
Having used interactive surveys with students to assess course quality in past work, I’m interested in testing the model outlined here with other scholars, as a pedagogical tool. And in using it to examine case studies of scholarly conflict, where substantive questions become undiscussable. As outlined in last month’s post, this appears to have happened in the Peter Ridd case.
Part of the wider context for any such project is how the actors understand the role of universities, in modern democracies. In the Enlightenment tradition, academic freedom is a defining value and a legitimating concept for universities. As the University of Chicago has declared, this means providing its members the “broadest possible latitude to speak, write, listen, challenge, and learn” by supporting their freedom “to discuss any problem that presents itself”.
After all, if complex and controversial problems can’t be debated openly and critically in “enlightened” settings like these, then where?
Update, July 2022
In a recent webinar I presented an updated version of the model at Chart 1. The webinar was part of a Heterodox Academy-funded project in 2022 on building viewpoint diversity in Australian universities. The latest version of the model can be seen in the July discussion paper.
Geoff Sharrock, 2012, Quality in teaching and learning: one path to improvement
Jamie Cameron, 2013, Giving and Taking Offence: Civility, Respect and Academic Freedom
bell hooks, March 2016, Speaking freely (video clip, 27 minutes)
James Hoggan, 22 February 2018, I’m Right and You’re an Idiot (ABC radio interview about Hoggan’s 2016 book, 17 minutes)
Rowan Atkinson, 15 August 2018, On free speech (video clip, 9 minutes)
Chris Gallavin, 21 September 2018, Some guidelines for civil discourse
Adrienne Stone, 15 October 2018, Four fundamental principles for upholding freedom of speech on campuses
Dominic O’Sullivan, 8 October 2019, There are differences between free speech, hate speech and academic freedom – and they matter
Hugh Breakey, 10 July 2020, Is cancel culture silencing open debate? There are risks to shutting down opinions we disagree with
Hugh Breakey, 24 August 2020, “That’s offensive, harmful and unhelpful” – The ethics of responding to arguments with allegations
Geoff Sharrock, 17 September 2020, Peter Ridd and the French Review connection
Hugh Breakey, 31 December 2020, Conspiracy theories on the right, cancel culture on the left: how political legitimacy came under threat in 2020
Since posting I’ve updated Charts 1 and 2. As flagged in earlier posts, my view of OECD data has been seen as heresy. The sample comments in Chart 2 are drawn from emails received from angry colleagues at the University of Melbourne in 2016. This followed a media misreport in The Australian newspaper on a journal article I’d published. In 2018 I published an update of the argument in the Australian Financial Review:
“How OECD data can misinform local university funding debates” 25 November 2018
“In its public spending on higher education, does Australia lag some 30 other OECD countries? Local reports have said so. In this narrative, the 2014 and 2016 editions of the OECD’s Education at a Glance ranked Australia “second-lowest in the OECD”. And in 2017 KPMG’s Julie Hare said that the OECD ranked us “among the bottom four countries at 0.7 per cent of GDP in its public investment in tertiary education, or about 40 per cent less than the OECD average of 1.1 per cent” while countries such as Portugal invested “far more”.
But OECD statistics are a cherry-picker’s picnic. We can’t properly compare our spending with Portugal’s by peering through the prism of a single slice of data. As the 2018 report confirms, our “bottom of the OECD” story is flawed. Consider how we fare in OECD metrics for total public spending on tertiary education. From 2010 to 2015 the Australian rate rose from 1.1 to 1.5 per cent of GDP, as the OECD average fell from 1.4 to 1.2 per cent. Portugal’s rate fell from 1.1 to 0.9 per cent. Below Portugal were Italy at 0.8, Greece, Hungary and Japan at 0.7, and Luxembourg at 0.5 per cent of GDP.
A Canberra spin-doctor could say that the latest official figures rank Australian public spending “seventh-highest in the OECD”. Confused? The fact is, OECD reports define “public” spending in more than one way. In their “tertiary education” dataset, government loans and allowances to students count as “public” spending. But local pundits prefer a different dataset, for spending on tertiary institutions from public and private sources. In these metrics (until this year’s report) Australian HELP loans were classed simply as “private” revenue. In 2015, our direct public grants to institutions amounted to 0.8 per cent of GDP (“eleventh-lowest”) against an OECD average of 1.0 per cent.
I’ll come back to how the OECD now presents “public” spending on tertiary institutions. But first, how do we fare overall in this dataset? From 2010 to 2015 our rate for total spending (from all sources) rose from 1.6 to 2.0 per cent of GDP. The OECD average rate fell from 1.7 to 1.5 per cent. Portugal’s rate fell from 1.5 to 1.3 per cent. Below Portugal were Greece, Italy, Hungary and others with rates of 1.0 per cent or less. Our spin-doctor could say that the OECD ranks Australia “fourth-highest in the OECD” for total tertiary spending. But as we know, in part this reflects our high share of offshore revenue from international enrolments. And in part, a domestic enrolment boom financed by uncapped government grants and loans.
Local knowledge aside, we must also consider that these OECD metrics track spending as a share of each country’s GDP. A booming economy will lower your rate. A major recession will lift it. From 2001 to 2015, Australian GDP grew by 50 per cent. But in Portugal, GDP grew by just 1 per cent. And Italy and Greece saw negative growth. The Euro Area average rate of growth was 14 per cent. How have faltering economies affected real tertiary spending? As the OECD’s 2018 report shows, most European tertiary sectors have had low growth. And in some cases (such as Italy, Spain and Portugal) negative growth. Over 2010-2015 real total spending on Australian tertiary institutions rose by 44 per cent while in Portugal it fell by 12 per cent.
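The denominator effect described above can be made concrete with a small sketch. The figures here are illustrative only, not OECD data: the point is simply that with real spending held constant, a booming economy shrinks the spending-to-GDP ratio while a shrinking economy inflates it.

```python
def share_of_gdp(old_share, spending_growth, gdp_growth):
    """New spending-to-GDP ratio after real growth in spending (numerator)
    and in GDP (denominator), both given as fractions (0.5 = 50 per cent)."""
    return old_share * (1 + spending_growth) / (1 + gdp_growth)

# Hypothetical country A: real spending flat, economy booms by 50 per cent.
print(round(share_of_gdp(1.5, 0.0, 0.5), 2))   # ratio falls: 1.5 -> 1.0

# Hypothetical country B: real spending flat, economy shrinks by 10 per cent.
print(round(share_of_gdp(1.5, 0.0, -0.1), 2))  # ratio rises: 1.5 -> 1.67
```

So two countries with identical real spending can move in opposite directions on a GDP-share league table, purely on the strength (or weakness) of their economies.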
Since we’ve had an enrolment boom, what about spending per student? In OECD estimates Australia spent $US20,300 per tertiary student in 2015 (in purchasing power parities). For Portugal the figure was $US11,800. Lowest in the OECD was Greece, at $US4100 per student. Clearly our “bottom of the OECD” story would not fly far in Europe. Its currency at home reflects a parochial history of funding laments, confirmation bias and cosmopolitan impressionism. As every commentator knows, HELP loans have enabled major investment in system growth. While most are repaid through taxation, their public cost is considerable. The myth that universities have been better funded in almost every other OECD country discounts what we know from domestic data.
Meanwhile, the OECD has acknowledged that its metrics for spending on tertiary institutions can under-state public investment in places like Australia. So its 2018 report now presents two types of “public” funding in the same table. Our rate for “initial” government spending at 1.3 per cent of GDP (loans included) sits alongside a “final” government spending rate of 0.8 per cent (loans excluded). The OECD average rates are 1.1 and 1.0 per cent respectively. For Portugal the figure was 0.7 in both cases. For our Canberra spin-doctor the OECD now ranks us both “sixth-highest” and “eleventh-lowest” for public spending. In reality, we’re somewhere in between.
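The accounting behind the two definitions can be sketched in a few lines. The 0.5 figure for the loans component is simply inferred from the gap between the two rates quoted above (1.3 less 0.8); it illustrates the arithmetic, not an OECD line item.

```python
# Australia's 2015 rates (per cent of GDP), as quoted in the text.
grants = 0.8  # direct public grants to tertiary institutions
loans = 0.5   # government loans and allowances to students (inferred: 1.3 - 0.8)

# "Initial" public spending counts loans as public money out the door.
initial_public = grants + loans
# "Final" public spending excludes loans, classing them as private revenue.
final_public = grants

print(round(initial_public, 1))  # 1.3 -> reads as "sixth-highest" in the OECD
print(round(final_public, 1))    # 0.8 -> reads as "eleventh-lowest"
```

Same country, same year: which rank gets reported depends entirely on which definition the reporter picks.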
Since 2015 I have argued that local accounts of OECD data under-state our public spending. In university circles this has provoked some allergic and Orwellian reactions. But heresy or not, the evidence remains: Australian public spending is not that bad, by OECD standards.”
My June 2020 and September 2020 updates on this critique provide charts with data from the OECD’s 2019 and 2020 reports. The June post includes a brief refutation of a Universities Australia counterclaim that appeared in The Conversation, in late 2019.