A rambling essay for Todd

By Philip

What is the problem? Why is it different and why is it so difficult?

Consider the question, what is disinformation? It seems simple enough and surely it is foundational. If we can’t agree on what the problem actually is, then how are we meant to respond to it? Presumably you have an answer? Most answers are something like: disinformation is false or misleading information given with the intent to deceive. The intent makes it different from misinformation. At this stage, we can assume that this sort of answer is given by most experts, i.e. it’s generally agreed across academic and professional practice.

But it’s nonsense. It falls apart given the slightest prodding and it’s hopeless in practice. First, there’s the intent part. The intent of the sender of information is almost never fully knowable. Even if it were, the idea that a piece of information is shared with the same intent by everyone is ridiculous. Some people may be malicious masterminds but others are just gullible. And even if intent were sensible and knowable, deciding what is clearly false and clearly misleading is (usually) really hard. In most cases, truthfulness is something that we decide collectively via some process (formal or informal… science, law, common sense) that, crucially, has to be agreed and accepted by its participants.

My point here is a simple one. How can we possibly claim to resolve a problem (even in part) if we can’t clearly and unproblematically articulate what the problem is?

That’s not the only point I want to make though. If we accept that disinformation is a problem with that collective process of deciding what is true, important, justified etc. – that is, if it’s not a problem specific to information itself – then this has serious implications for how we think about it as a problem. In short, it means that we can’t treat disinformation as though it is a problem within our society (which, by the way, is absolutely how we are treating it: as something that can be isolated and treated), and instead we have to recognise that it is a problem of our society – the logical product of other things we are doing to collectively organise and make sense of things.

That makes it a really hard problem to understand and to address. It means that you have to ask massive questions of society at this time just to begin to make sense of it. My complaint – the one I’m making over and over again – is simply that we’re not doing that. What we’re doing is asking the simple, small questions that we’re used to asking within our domains of expertise and pretending that this information problem is something that suits that practice. It’s not.

I know that seems like a really pseudish, academic point but it explains why we’re flailing so badly in our response to the problem. Imagine, for a moment, that you’re in the position of providing disinformation consultancy services to someone involved in the renewables rollout. You do what consultants do. You go away and try to read and synthesise the research in this area – looking for practical responses that your client can make to limit the influence of informational uncertainty.

You’ll find pretty quickly that the only empirical claims being made about this problem come from two areas of research. There are a bunch of psychologists telling you that you can treat information like a virus and inoculate against it by carefully seeding some kind of informational vaccine, or by pre-bunking your audience with truth bombs, or by treating the problem with hard facts… and it’s all total crap. Even if those techniques did work in the lab (and the evidence is highly mixed, at best), they are absolutely 100% impossible to implement in practice.

The second bunch are the technologists and the computer scientists, who are currently pushing a bunch of war-game, AI-driven responses which are a) fantastical and b) wholly in opposition (academically and morally) to any sort of liberal democratic norms.

If you can be bothered, maybe you extend your reading to include media theorists and philosophers – people like me – who will tell you that disinformation is a logical outcome in a society that has atomised through capitalism and is media-saturated to the point that individual attention is the only informational value and there is no collective reality. Which may be true, but it leaves you telling your client that to limit misinformation you somehow have to re-engineer society away from its dominant modes of power and politics.

I’m taking a long time to make it but my point, basically, is that you cannot sell disinformation services based on how we (the people I know, work with, read etc.) are “working” on the problem. We’re doing a bad job. Similarly, a citizens’ assembly cannot deliver a disinformation strategy based on our work – it’s not a problem ready for deliberation – you’d need alchemy.

So, where does that leave us?

First, here’s my attempt at naming the problem quickly and simply. We are dealing with the aftermath of the end of the enlightenment. A significant period of Western history has ended and science, academia, liberal politics and public corporations are no longer dominant powers in our collective organisation and sense-making. Disinformation is just the label we’re giving to a collective fuck you to these liberal institutions – and the massive amounts of grift and opportunism that have blossomed on that dung heap. It’s a rejection of truth but far more significantly it’s a reconfiguration of the processes via which we decide truth. It is the culture war – it’s happening because enough people have recognised that they can get what they want by acting outside the old rules for assigning value and consequence.

What does “a reconfiguration of the processes” mean? To make sense of this, you have to think of the world not as a thing but as an event, or rather a sequence of events – a timeline. Information is how we make sense of our place in that timeline: it’s how we decide what is real. We communicate to decide what has happened (A), what is happening (B), and what might happen as a consequence (C). The really seductive thing about the idea of truth is that it promises that A and B are universal and that A + B => C.

To live collectively – to occupy a shared sense of reality and to act accordingly – a group must have some way of recognising B, accepting A and planning for C. In other words, sociality relies on the shared recognition of things happening (and their meaning) in the present, which is always negotiated in relation to a largely accepted story about the past, and done with some sense of shared anticipation of the future. It’s the organisation of events into a sequence that most people accept as logical. It’s an act of communication – and we have various norms, methods, laws, institutions to facilitate that logical ordering.

That explains why this is a problem now. Trump is massively significant here. Obviously, those orders had long been weakened, but he comes along and demonstrates to a massive audience that you can crudely and outright defy them. Basically, he reveals that truth is fallible, especially if you are rich, white and shameless. He creates a permission structure for masses of people to act the same way and this supercharges an alternative information order – it defines one side in the culture war. One of our problems is that we still haven’t fully appreciated that we are the other side, whether we like it or not. If you want to fight disinformation, then you are fighting a culture war, no matter how hard you pretend otherwise.

So what are we faced with? A problem that is deeply ingrained in the dominant logical struggle of our time. That has myriad causes – social, political, economic, technical – and that is producing an overarching cultural struggle that situates our interactions at all levels of social experience. The only interventions that are really going to affect it are massive, unwieldy and authoritarian – you could severely restrict speech, for example, or drastically limit how much time people spend online, collectivise media or regulate social media out of existence. None of this is going to happen.

If it’s almost impossible to deal with this as a macro-social problem, it’s equally difficult to deal with it as an individual, psychological problem. Are we seriously suggesting that people have suddenly developed or evolved this predilection for disinformation? Or that we can change the course of this evolution with some carefully targeted interventions? Of course not. There is no change in human nature here, no sudden and inconvenient shift in the essential nature of humans, only a confluence of circumstances that makes it suddenly and opportunistically valuable to say what you want and to believe what you want. You can’t educate or nudge people away from disinformation. Step back and think about the logic behind these ideas – ask again, why is this a problem now?

My view is that there are a handful of really important factors. One is technology and the catalytic influence it has had on some of the worst impulses of late-age capitalism. Another is the weakening and failure of the “good” informational orders – there are lots of reasons why democracy is struggling, plenty of them internal. The third is cultural, and this really is a struggle over abstract values: you have to be clear what you stand for and why you stand for it. We have to be able to understand these fundamental factors and, more crucially, we have to understand how they interact with each other. We’ve hardly begun this task.

Jesus, this is all background to the main point I was going to make. What do I propose? What would I say if I were trying to sell disinformation consultancy?

  1. You have to recognise the complexity of the problem and have some capacity to understand it. You have to be able to ask the right questions of the context you’re operating in. Contexts are really complex and they overlap. People in the Wimmera objecting to wind farms are still within the context of Trump. Describing and mapping these informational contexts is a key piece of work that nobody has done.
  2. Then, you have a choice of strategies. Either you fight the culture war or you attempt to protect yourself from it. What’s the difference? In the first case, you’re going to care about what people think and try to change their minds. In the second, you’re going to try to insulate a process from them regardless of what they think. The latter strategy may not be very democratic but it could be socially good, given the right context and design. It could be deliberative. In both cases, I think the only feasible strategies are hyper-local and there’s obvious opportunity in being the designer and facilitator of those processes.
  3. Finally, I think there’s massive strategic (and possibly commercial) value in being able to name the problem and explain its complexity better than anyone else. At the moment, some people can sell bad solutions because most people don’t know what to do and will jump on any bandwagon that looks viable. During this period, it is valuable to point out flawed or illogical solutions, especially if they could exacerbate a situation (I’m looking at you, AI chatbots).

How does this feed into the resilient communities working group and our plans for February?

My initial thinking was this: I have no solutions at the larger scale of intervention but if we could convene a hyper-local group within a room – a school community, perhaps, or a distinct geographical community (e.g. farmers plus a regional centre) – then perhaps it is possible to devise a contextual strategy that protects that community from national and international struggles. That strategy, most likely, would involve a recommitment to hyper-local deliberation plus some preliminary information about how overarching contexts are interacting and framing disinformation concerns. It might produce a local charter or commitment to truth making and appoint some representatives to oversee/facilitate that process.

Even that feels a little speculative or premature. Those principles are vague and generalised, so perhaps nothing should proceed until we have made a proper effort to understand the informational dynamics within that context… hence the suggestion that we convene a group simply to explore their experiences, combined with some sort of media environment analysis.

Now, I’m not sure. If you can suggest a strategy that sounds viable, I’d probably jump on it. The issue remains – who has such a strategy?
