Last month I looked at the uses and limits of academic freedom in Australia, taking the Peter Ridd affair at James Cook University as a case study. I also looked at how last year’s French Review balanced two aspects of the work of universities: promoting free exchange as a way of exploring ideas, testing claims, exposing error and verifying knowledge; and promoting respect for others’ rights not to be harmed by (say) defamation or vilification.

Of course, the usual legal limits on free speech apply to academic freedom. But beyond this, the French Review saw risks with any formal rule against “lack of respect”. In its Model Code, the fact that someone may feel insulted or offended by a (lawfully expressed) view doesn’t justify restrictions. Otherwise, competing views on controversial topics won’t be aired openly enough to be properly examined.

This doesn’t make academic freedom a licence for ad hominem attacks. By long tradition, universities are institutions designed to seek and settle truth and knowledge reliably. And as seen in the first US presidential debate last month, personal attacks and name-calling add heat but shed no light on matters of substance.

Source: https://www.latimes.com/politics/story/2020-09-29/trump-biden-first-presidential-debate-scorecard

In politics, such debates are closer to sporting events than to scholarly inquiry (just watch this clip from the West Wing “Game On” episode, back in 2002). Point-scoring inevitably assumes priority in election campaigns. But even here, points will be won with voters for addressing policy substance, not just for rhetorical skill at wrong-footing an opponent.

In scholarly debates a free exchange of views isn’t a free-for-all, or a win-at-all-costs struggle for dominance. Here the main aim is to expose and examine the substantive issues – clarifying the basis of each side’s main case, and where and why their views differ. Framing the exchange as a kind of combat between two forms of certainty can be self-defeating.

So, what debating norms can help university students engage in genuinely open and critical thinking in a class or a campus forum? In my model in Chart 1, the parties will disagree well if they aim high: cross-examining each contested claim with better logic and evidence, at Level 2. If they can do this well enough for long enough, either view – mainstream or minority – may be finally refuted at Level 1. (Or strongly reaffirmed; or sensibly reframed.)

But, as my list of Level 3 tactics suggests, there are many ways to disagree badly; not just with what I’ll call “BadHom” or “what a bad person!” tactics.

Chart 1 To disagree well, aim high. (The pyramid idea draws on Paul Graham, see Readings)

At Level 3, the first two “avoidance” tactics seek to enlist support for a view by appealing to a higher authority or greater good. (But, how far do we rely on one, or prioritise the other? With what caveats, in what context?) The next four seek to evade the point of an opponent’s view, mainly by raising other concerns. (But, how relevant and significant are these?) The final three seek to exclude a view from full and fair consideration, by casting doubt on the character or credibility of its proponent. (But, are they just being offensive or arguing in bad faith? Is their view simply untenable? Is the case not an argument, but a rant?)

Common avoidance tactics include “straw man” arguments (Misdirect 4). An opponent’s view is reframed in a way that gives it unintended meanings. These depictions are then disputed, instead of the opponent’s actual claims. This makes it easy to shoot the messenger (BadHom 1-3) without making any effort to make sense of their message.

Another tactic is to introduce a factoid that seems to prove the point (Misdirect 2). This may be a “red herring” that leads away from the original point (Misdirect 1). But even if directly relevant, it may not amount to firm evidence. It’s easy to conflate data, information, knowledge and expertise (wisdom). The phrase “lies, damned lies and statistics”, popularised by Mark Twain in 1907, refers to the use of “expert” knowledge to prop up or put down a case, without proving anything conclusively.

The risks of “spin” multiply when the matter is complex, facts are few or open to interpretation, and when one side’s “expert” has scope to pre-select which facts “count” as reliable evidence. A 19th century British judge classed unreliable witnesses as “simple liars, damned liars, and experts”. His concern wasn’t that expert witnesses said things they knew to be untrue. It was their selective use of “emphasis” and their “highly cultivated faculty of evasion”.

Similar issues arise in public policy debates, where the basis for “evidence-based” decisions is often said to be “policy-based evidence”. The “spin-doctor” art of cherry-picking convenient facts or quotes to shape or shift a narrative is familiar in professional politics. In an age awash with media, political parties and industry lobbies rely heavily on scene-setting and story-telling to win popular support. In Danish politics, “spindoktor” is a professional job description.

Danish prime minister Birgitte Nyborg and her spin doctor Kasper Juul (right) in the 2010 drama series Borgen. Source: https://www.bt.dk/film-og-tv/populaer-spindoktor-degraderes-i-borgen (Image: Esben Salling)

While spin-doctoring is a well-known feature of modern democracies, the use of rhetoric to persuade an audience is as old as Aristotle. The issue is whether it’s used to promote more informed deliberation, or instead to confuse or close down a discussion.

In modern university contexts, another Level 3 tactic is to take offence at the tone or terms of an opposing view, without addressing its substance (BadHom 1). This may seem more civil than simply calling someone an FBDZS (BadHom 3). But the “chilling effect” may be similar. By shifting off-topic to invoke rules of civility, it offers scope to censor or “cancel” the exchange without conceding any substantive point. From there, it’s a short step to leaving the matter unexamined, with neither party willing to spend time decoding the wrong-headed assumptions or misguided ideologies of “bigots and snowflakes”.

On controversial topics, many debates mix Level 2 and Level 3 ways of arguing. Some are “won” with Level 3 tactics alone. But focusing on tactical point-scoring and side-stepping, no matter how skilful, doesn’t lead to scholarly refutation. Instead, it often leaves core points of contention unexamined. Once a majority view seems settled on this basis, there’s not much space left for anyone’s “radical openness”. Minority views (or any disconfirming data they present) may be discounted or suppressed, to the point where they’re undiscussable.

In a university context, this is where the principle of academic freedom does its work. As one scholar observes: “popular or mainstream ideas generally need no protection”. As places of higher learning, universities assume responsibility for protecting free exchange and supporting viewpoint diversity, while also promoting the practice of scholarly refutation. This stance affords “heretic protection” to minority views, while also exposing them to serious counter-argument.

To illustrate my model, Chart 2 presents a sample of Level 3 responses to my critique of OECD spending comparisons in 2016. (Controversially, the paper argued that Australian levels of public spending on higher education weren’t as low as claimed, because such claims relied on GDP-based metrics to suggest that we were “33 out of 34 for public funding” and the like. It suggested that governments should not take such claims seriously; and that OECD metrics were a “cherry-picker’s picnic” for the sector’s funding advocates. Two colleagues took umbrage, as illustrated in Chart 2. After spending time explaining and apologising for causing offence, I realised that neither had addressed the paper’s main case directly.)

Chart 2 Responses to the author’s paper, March 2016 (some are paraphrased, see Notes)

The model offers a rubric to help scholars and students recognise different ways of arguing, and their limitations. It can be applied to class discussions of controversial topics, such as how to address climate change, debates on immigration policy, or the risks and costs of government responses to the coronavirus (overlooked or overcooked?).

For example, the lecturer could ask a panel of student judges to observe the discussion of a contentious “hot topic”. (Many debates on climate change, for example, illustrate what one author calls the “I’m Right and You’re an Idiot” approach, where the aim is simply to discredit the other team.)

By naming the styles of arguing for and against each viewpoint, students could learn to identify how well argued each case was. The class could assess how often each side made strong points with logic and evidence; how often various Level 3 tactics were used; and how this affected debate quality in terms of clarifying issues, testing how valid claims were, and establishing which case seemed stronger.
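As a rough sketch of how such a tally might work in practice (the sides, tactic labels and counts below are entirely hypothetical, invented to illustrate the idea rather than drawn from any actual debate):

```python
from collections import Counter

# Hypothetical labels a panel of student judges might assign to each
# intervention in a debate, using the Level 1-3 model from Chart 1.
# "L2-..." marks a substantive move (logic or evidence);
# "L3-..." marks an avoidance tactic.
judged_moves = [
    ("affirmative", "L2-evidence"),
    ("affirmative", "L3-straw-man"),
    ("negative", "L2-logic"),
    ("negative", "L3-red-herring"),
    ("affirmative", "L2-evidence"),
    ("negative", "L3-badhom"),
]

# Tally Level 2 versus Level 3 moves per side.
tally = Counter((side, label.split("-")[0]) for side, label in judged_moves)

for side in ("affirmative", "negative"):
    l2, l3 = tally[(side, "L2")], tally[(side, "L3")]
    print(f"{side}: {l2} substantive moves, {l3} avoidance tactics")
```

Even a simple count like this would let a class see at a glance which side leaned more heavily on Level 3 tactics, and discuss whether the “stronger” case was won on substance.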

The model offers scope to engage students as partners in action research, concerned with the practice of free inquiry as an intellectual discipline. It may also offer a basis for moderating debates on controversial topics that cause conflict or distress on campus. With polarised or high-conviction topics, students may turn to “BadHom” tactics more readily. This seems more likely when flaws in their substantive case are at risk of exposure; or when others persist with Level 3 “gaslighting” by disregarding substantive points that erode their own case.

Having used interactive surveys with students to assess course quality in past work, I’m interested in testing the model outlined here with other scholars, as a pedagogical tool. And in using it to examine case studies of scholarly conflict, where substantive questions become undiscussable. As outlined in last month’s post, this appears to have happened in the Peter Ridd case.

Part of the wider context for any such project is how the actors understand the role of universities, in modern democracies. In the Enlightenment tradition, academic freedom is a defining value and a legitimating concept for universities. As the University of Chicago has declared, this means providing its members the “broadest possible latitude to speak, write, listen, challenge, and learn” by supporting their freedom “to discuss any problem that presents itself”.

After all, if complex and controversial problems can’t be debated openly and critically in “enlightened” settings like these, then where?

Selected reading

bell hooks, 2006, Cultural criticism and transformation (video clip, 6 minutes)

Paul Graham, 2008, How to disagree

Geoffrey Boulton and Colin Lucas, 2008, What are universities for?

Geoff Sharrock, 2012, Quality in teaching and learning: one path to improvement

Jamie Cameron, 2013, Giving and Taking Offence: Civility, Respect and Academic Freedom

University of Chicago, 2014, Report of the Committee on Freedom of Expression

bell hooks, March 2016, Speaking freely (video clip, 27 minutes)

Katharine Gelber, October 2016, University changes to academic contracts are changing freedom of speech

James Hoggan, 22 February 2018, I’m Right and You’re an Idiot (ABC radio interview about Hoggan’s 2016 book, 17 minutes)

Rowan Atkinson, 15 August 2018, On free speech (video clip, 9 minutes)

Joellen Riley, Celine Boehm, David Scholsberg and Kirsten Andrews, 18 September 2018, Differing views: valuing disagreement (University of Sydney podcast, 65 minutes)

Chris Gallavin, 21 September 2018, Some guidelines for civil discourse

Adrienne Stone, 15 October 2018, Four fundamental principles for upholding freedom of speech on campuses

Jacqui Maley, Jordan Baker and Tim Elliott, 20 June 2019, Is there a free speech crisis in Australia’s universities?

Katharine Gelber, 24 June 2019, Dan Tehan wants a ‘model code’ on free speech at universities – what is it and do we need it?

Carolyn Evans, 11 February 2020, Freedom of speech on campuses should never be confused with undisciplined free-for-all rants

Geoff Sharrock, 17 September 2020, Peter Ridd and the French Review connection

Tim Flannery, 3 November 2020, Australia, the climate can’t wait for the next federal election. (See reader comments for examples of BadHom and other Level 3 tactics).

Notes

Since posting I’ve updated Charts 1 and 2. As flagged in earlier posts, my view of OECD data has been heresy in the Australian university sector. With minor paraphrasing, the sample comments in Chart 2 are from emails received from angry colleagues at the University of Melbourne in 2016. This followed a media misreport on a journal article I had published. To add context, excerpts from the media report, email responses and related events are provided below.

“Funding claims rely on data ‘misuse'” (By Julie Hare, The Australian, 9 March 2016)

“In an extraordinary attack, a University of Melbourne academic has accused current and former colleagues, including one of the most celebrated professors of higher education in the world, and the peak body Universities Australia of misusing OECD data to promote a false understanding of government funding of the higher education sector. Geoff Sharrock, a program director at the Melbourne Centre for the Study of Higher Education, claims in an article in the Journal of Higher Education Policy and Management that “commentators pluck simple metrics from the statistical tables in Education at a Glance” to “present flawed and misleading interpretations of OECD statistics”. These give the impression that the Australian higher education sector is chronically underfunded relative to nearly all other OECD countries … In the article, “Beautiful lies, damned statistics: Reframing Australian university funding”, Dr Sharrock singles out former University of Melbourne colleague Simon Marginson, citing 11 instances in which he claims the professor, who is now director of the University of London’s Institute of Education, used OECD data to claim a critical failure in federal government funding of higher education…”

Surprised, I emailed two MCSHE colleagues (and a similar note to others, such as Universities Australia’s Belinda Robinson) to alert them to the misreporting and provide a copy of the journal article. My first note was cc’d to my Institute director at MCSHE, and two University of Melbourne executives who were at the Universities Australia conference in Canberra that day. (I was in Melbourne, teaching in an all-day seminar. From those at the conference, I heard that my paper had been “much discussed!”.)

From: Geoff Sharrock
Sent: Wednesday, 9 March 2016 5:14 AM
To: Emmaline Bexley; Simon Marginson
Cc: Leo Christiaan Johannes Goedegebuure; Richard James; Glyn Davis
Subject: paper on OECD comparisons

Emmaline, Simon – Julie Hare’s report in today’s Australian overstates and overdramatises what my article is about. The paper expands on a Conversation article I published in October last year.

https://theconversation.com/oecd-comparisons-dont-prove-our-unis-are-underfunded-47412

Yes, it is designed to stir further debate about this, and to counter some common and flawed assumptions. But it does not use the term ‘misuse’ and for the most part is a technical analysis of how misinterpretations and misrepresentations arise. It argues that the OECD statistics most often cited – often to argue for better funding, as one would expect – are widely misunderstood, and are sometimes used selectively. It also argues that, once analysed in context, statements which rely on these metrics in isolation do appear to present a flawed picture of how Australian universities are financed. The article does not claim that Simon wilfully misconstrues OECD data, or that every comparative statement he has made over the years is problematic. But it is critical of the way the underfunding narrative has been framed through an OECD lens in some of Simon’s commentary, and the way other commentators have adopted this framing over time. It argues that failing to consider differences in GDP growth does affect one of Simon’s claims at least, of a $6b shortfall. It is also critical of an attempt to discredit a Grattan Institute report for its reliance on domestic data rather than OECD data. A copy of the article as published online last week is attached. There are some minor inaccuracies, to be fixed before the print edition is finalised. If anyone thinks I’ve got this all wrong, publishing a detailed counter-argument should help to inform future funding debates. Regards, Geoff

(The first reply added the journalist to my list of recipients. It reinforced Hare’s claims…)

From: Simon Marginson
Sent: Wednesday, 9 March 2016 7:42 AM
To: Geoff Sharrock
Cc: Leo Goedegebuure; Richard James; Glyn Davis; Sophia Arkoudis; Emmaline Bexley; Hare, Julie
Subject: Re: paper on OECD comparisons

I will refrain from a point by pit refutation. No doubt Geoff Sharrock would like to have a running debate for months about his paper, but there’s too much bile and bias in it for me to enter into a reasoned argument. I note the numerous instances of the use of selective quotation out of context—never a promising beginning to a discussion. The heading ‘beautiful lies’ is too close to abuse to pass without comment. I must register a mild objection at the use of this terminology by Geoff, which is unpleasant, unnecessary and inaccurate. I also mildly object to being singled out as his primary target, as apparently the principal advocate of a view about low public funding in Australia. This argument has been put repeatedly by many commentators, for many years, including (frequently) by the OECD itself. Why not have a chat about it to Barry McGaw, the Australian former head of the OECD Education programme? He’s on the record on this point.  Further and briefly:

  1. It takes a massive act of intellectual courage to deny the relevance of international comparisons in the present environment, but it is equally unwise. I see that kind of parochialism in the UK at times. It comes from UKIP and the Brexit side of the Tory Party.   Self-referential in a globalised world. I find that quite remarkable these days, and it has always puzzled me why Geoff has persisted with that line of thought. It just isn’t intellectually tenable. It is hard to defend any in-principle restriction on the kind of knowledge that contributes to responsible, intellectually coherent explanation.
  2. No one has ever denied that Australia has relatively high private funding, or that its total funding for higher education is at or above OECD average levels. This partly explains the apparent contradiction that Geoff claims to discover between the argument about low public funding, and the relative health and success of Australian universities.
  3. Private funding has different implications for facilities, spending and choices than does public funding, as every vice-chancellor knows. Much of what is earned in private funding is fed back into the extra costs of raising it. Increases in public funding can go straight to improvements. Market dependence is less happy for universities. Universities all over the world could tell Geoff this (but presumably he doesn’t believe the relevance of international comparisons, only comparisons with the Grattan Institute!)
  4. The fact that class sizes/staffing ratios have not improved while Australian higher education has been flourishing etc might cause Geoff to think. Position in the rankings is generated by sustaining and improving research but in a resource strapped environment something has to give. 
  5. If GDP goes up why shouldn’t expenditure on higher education also go up? It is widely agreed that the failure to use the additional resources generated by the resources boom, for education and infrastructure, was a major missed opportunity. Presumably Geoff sees no problem. He’s certainly not complaining. 

All the best to all Simon

(My initial response to these concerns – and similar objections raised by Dr Bexley – was conciliatory. But as the exchanges progressed that day, it seemed relevant to remind Dr Marginson that his own critiques of others’ views on funding policy also referred to “cherry-picking” data to support a case. The final response was as follows…)

From: Simon Marginson
Sent: Wednesday, 9 March 2016 9:08 PM
To: Geoff Sharrock
Cc: Leo Goedegebuure; Richard James; Glyn Davis; Sophia Arkoudis; Emmaline Bexley
Subject: Re: paper on OECD comparisons

Geoff I stand by my reasoned critique of the aforementioned Grattan report, from which you quote selectively. That critique did not include personal attacks on individuals, nor did it entail an implication of academic fraud. I have no obligation to continue to discuss your paper, or any other matter, with you. I would be grateful if I could ‘unsubscribe’ from this correspondence, to the extent that you are involved. Thanks Simon

(In the event no-one named in The Australian article responded with a critique of the article. A few days later, an email from the journal editor advised me that: “Simon Marginson has requested that in the event that you do not retract the article, that I include a statement disassociating the Journal from ‘Sharrock’s breach of ethics'”. My initial response was conciliatory. I suggested amending the online paper to clarify any points of concern, such as footnoting its reference to “lies, damned lies and statistics”; and adding an explanatory note with an apology for any distress caused. However, the response from the journal publisher in late March was as follows…)

“We’ve been advised by T&F Legal that given the potential for allegation of defamation, and the author’s concession of error, a Corrigendum is not sufficient to eliminate the risk of litigation in this case. We therefore recommend that: 1. The original article is removed from Taylor & Francis Online immediately, with the placeholder ‘Content withdrawn at the request of the author’; this is a matter of urgency given the coverage in the media (and social media); 2. We obtain written confirmation from Geoff that Simon Marginson finds the proposal to revise and repost a corrected version of the article to be acceptable (i.e., so we are assured there is no continuing threat of legal action)…”

(Since authors indemnify publishers, at this point it seemed best to get legal advice. I had asked Hare to publish my own outline of my paper’s case in The Australian, to correct her report. She declined: “If I publish your article, it implies my interpretation of your article was wrong. I don’t believe it was wrong. You might not like that interpretation but it’s valid…”.

The journal proved unwilling to provide any specifics of the complaint. As it didn’t seem reasonable that a complainant who refused to discuss a peer-reviewed paper could decide whether a journal was free to publish it, I didn’t accept the T&F advice. Instead, in the weeks that followed, I rechecked quotes and offered further explanation as to why my case seemed valid, despite the objections I’d seen. But there was no reply. The complaint was not withdrawn.)

What happened in the ensuing months – until my University of Melbourne contract ended and my involvement with the Institute was terminated – is a longer story. In my next job at Monash University I learned that other proponents of our “bottom of the OECD” narrative also found my “lack of faith disturbing”. My view was a kind of university funding lobby thought crime, best not given oxygen by way of open discussion – in Canberra especially.

As noted in last month’s post on OECD metrics, The Australian later confirmed that my work had been misreported, and published my response in August 2016. And as noted in a June post on this topic, the “Beautiful lies, damned statistics” paper was republished in another journal in 2017, with updated OECD data. In 2018 I published a further update in the Australian Financial Review (reproduced below). As expected, no-one responded with a counter-argument…

“How OECD data can misinform local university funding debates” 25 November 2018

“In its public spending on higher education, does Australia lag some 30 other OECD countries? Local reports have said so. In this narrative, the 2014 and 2016 editions of the OECD’s Education at a Glance ranked Australia “second-lowest in the OECD”. And in 2017 KPMG’s Julie Hare said that the OECD ranked us “among the bottom four countries at 0.7 per cent of GDP in its public investment in tertiary education, or about 40 per cent less than the OECD average of 1.1 per cent” while countries such as Portugal invested “far more”.

But OECD statistics are a cherry-picker’s picnic. We can’t properly compare our spending with Portugal’s by peering through the prism of a single slice of data. As the 2018 report confirms, our “bottom of the OECD” story is flawed. Consider how we fare in OECD metrics for total public spending on tertiary education. From 2010 to 2015 the Australian rate rose from 1.1 to 1.5 per cent of GDP, as the OECD average fell from 1.4 to 1.2 per cent. Portugal’s rate fell from 1.1 to 0.9 per cent. Below Portugal were Italy at 0.8, Greece, Hungary and Japan at 0.7, and Luxembourg at 0.5 per cent of GDP.

A Canberra spin-doctor could say that the latest official figures rank Australian public spending “seventh-highest in the OECD”. Confused? The fact is, OECD reports define “public” spending in more than one way. In their “tertiary education” dataset, government loans and allowances to students count as “public” spending. But local pundits prefer a different dataset, for spending on tertiary institutions from public and private sources. In these metrics (until this year’s report) Australian HELP loans were classed simply as “private” revenue. In 2015, our direct public grants to institutions amounted to 0.8 per cent of GDP (“eleventh-lowest”) against an OECD average of 1.0 per cent.

I’ll come back to how the OECD now presents “public” spending on tertiary institutions. But first, how do we fare overall in this dataset? From 2010 to 2015 our rate for total spending (from all sources) rose from 1.6 to 2.0 per cent of GDP. The OECD average rate fell from 1.7 to 1.5 per cent. Portugal’s rate fell from 1.5 to 1.3 per cent. Below Portugal were Greece, Italy, Hungary and others with rates of 1.0 per cent or less. Our spin-doctor could say that the OECD ranks Australia “fourth-highest in the OECD” for total tertiary spending. But as we know, in part this reflects our high share of offshore revenue from international enrolments. And in part, a domestic enrolment boom financed by uncapped government grants and loans.

Local knowledge aside, we must also consider that these OECD metrics track spending as a share of each country’s GDP. A booming economy will lower your rate. A major recession will lift it. From 2001 to 2015, Australian GDP grew by 50 per cent. But in Portugal, GDP grew by just 1 per cent. And Italy and Greece saw negative growth. The Euro Area average rate of growth was 14 per cent. How have faltering economies affected real tertiary spending? As the OECD’s 2018 report shows, most European tertiary sectors have had low growth. And in some cases (such as Italy, Spain and Portugal) negative growth. Over 2010-2015 real total spending on Australian tertiary institutions rose by 44 per cent while in Portugal it fell by 12 per cent.

Since we’ve had an enrolment boom, what about spending per student? In OECD estimates Australia spent $US20,300 per tertiary student in 2015 (in purchasing power parities). For Portugal the figure was $US11,800. Lowest in the OECD was Greece, at $US4100 per student. Clearly our “bottom of the OECD” story would not fly far in Europe. Its currency at home reflects a parochial history of funding laments, confirmation bias and cosmopolitan impressionism. As every commentator knows, HELP loans have enabled major investment in system growth. While most are repaid through taxation, their public cost is considerable. The myth that universities have been better funded in almost every other OECD country discounts what we know from domestic data.

Meanwhile, the OECD has acknowledged that its metrics for spending on tertiary institutions can under-state public investment in places like Australia. So its 2018 report now presents two types of “public” funding in the same table. Our rate for “initial” government spending at 1.3 per cent of GDP (loans included) sits alongside a “final” government spending rate of 0.8 per cent (loans excluded). The OECD average rates are 1.1 and 1.0 per cent respectively. For Portugal the figure was 0.7 in both cases. For our Canberra spin-doctor the OECD now ranks us both “sixth-highest” and “eleventh-lowest” for public spending. In reality, we’re somewhere in between.

Since 2015 I have argued that local accounts of OECD data under-state our public spending. In university circles this has provoked some allergic and Orwellian reactions. But heresy or not, the evidence remains: Australian public spending is not that bad, by OECD standards.”
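The GDP-denominator effect described in the reproduced article can be sketched with a small calculation. The growth figures are rounded from the article (Australian GDP grew about 50 per cent over 2001-2015, Portugal’s about 1 per cent); the starting values are invented for illustration, and the helper function is my own sketch, not an OECD method:

```python
def spending_rate(spending: float, gdp: float) -> float:
    """Spending expressed as a percentage of GDP."""
    return 100 * spending / gdp

# Suppose two countries each spend 15 units on tertiary education
# against a base-year GDP of 1000: both sit at 1.5% of GDP.
spend = 15.0
base_gdp = 1000.0

# Hold real spending constant while GDP diverges.
boom_gdp = base_gdp * 1.50   # ~50% growth, as in Australia
flat_gdp = base_gdp * 1.01   # ~1% growth, as in Portugal

print(round(spending_rate(spend, boom_gdp), 2))  # booming economy: rate falls to 1.0
print(round(spending_rate(spend, flat_gdp), 2))  # flat economy: rate barely moves (1.49)
```

With identical real spending, the booming economy “falls” in the rankings simply because its denominator grew, which is the article’s point about comparing Australia with low-growth European economies.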

Further reading

My June 2020 and September 2020 updates on this critique provide charts with data from the OECD’s 2019 and 2020 reports. The June post includes a brief refutation of a Universities Australia counterclaim that appeared in The Conversation, in late 2019.
