So it’s been a somewhat gruelling couple of weeks, getting everything together for the two methods courses I’m covering this term. The undergraduate course, since it’s new and we were designing it from scratch, took the lion’s share of the time – although it was nice to be able to work alongside the ultra-competent (and ubiquitously sardonic) L Magee in pulling everything together.
The course is large enough, and massive lecture halls scarce enough, that we have to deliver two separate iterations of the lecture – so we had one go at the lecture Monday morning, and will get another try on Friday. Most of the first lecture, necessarily, dealt with housekeeping and course mechanics. I’m currently trying to gather my thoughts from Monday – I always learn something about the tacit logic of my own stuff when I present it – there were elements of the logic underlying the structure of the lecture, reasons I wanted to organise my bits the way I did, that I sort of “got” only while listening to myself give the lecture… We’ll see whether this improves the iteration to be delivered on Friday.
I thought I’d post a few notes here on the lecture and course concepts – with the caveat that I’m always a bit cringe-y when I expose pedagogical material publicly. There’s a strong exhortative dimension to teaching – things get simplified, not simply or straightforwardly to make them easier to learn, but with the goal of trying to rouse something, trying to pass along a certain contagion about why this stuff can actually be exciting (er… realising that what I’m about to write may not… er… have that effect on terribly many people – I don’t claim to be a rousing lecturer – quite the contrary (really strongly the contrary) – my skills lie much more in leading discussions – but there is still an element, in lecturing, of wanting to communicate affect, and not simply content, of wanting to share, somehow, that the very abstract sort of material that I generally teach, can be deeply meaningful in its strange way – and something about what I do, to try to communicate this, never seems to me to translate well when I write about what I did – instead, what comes through is the simplicity of the content, all the ways I would qualify it, all the ways I disagree with it… And yet… There’s a reason that stuff gets left out of the lecture in the first place… A reason that doesn’t prevent me from being fairly self-conscious about reproducing lecture concepts outside the shelter of the lecture hall…)
The course is titled “Social Research: Qualitative”, and the structure of classes here gives you twelve weeks to somehow meet whatever expectations such a title engenders. Last year, the staff member who took this course decided that twelve weeks simply wasn’t enough to give a meaningful introduction to something as broad as “social research”, and so chose to drill down into one research method (discourse analysis), to give the students some in-depth experience with mastering a particular method, from which they could hopefully extrapolate when orienting themselves to other methods in the future. In earlier years, the course was taught with a heavy NVivo focus, with all the students doing the same research project – again with the thought that students could extrapolate their experience with that project into other sorts of research they might conduct in the future.
We’re equally dismayed, I suspect, by the jarring disjunction between the expansive course title and what can reasonably be covered in twelve weeks in an introductory methods class for second-year undergraduates. Something – lots and lots of somethings – has to “go” to make the course possible.
We’ve channelled our dismay, however, in a slightly different direction: while it is certainly true that any individual student is going to do some specific form of research project, and neglect others, we’ve decided not to pre-dictate either the method or the project itself. There will no doubt be plenty of project-deflection over the term, as students choose topics that are too vast – or illegal, dangerous, or inappropriate in other ways… ;-P But in principle we’ve left it to the students what they want to study, substantively, and how they intend to study it. Although we will do a bit with “method” in the “to do” list sense in this course, we’ve decided instead to focus on the most basic elements of the research design process – becoming curious about something, asking a question, looking around to see whether anyone else has ever asked something similar, trying to figure out what you need to do, to answer the question you’ve asked, and then being accountable for your question, what you’ve done to answer it, and the answer you’ve put forward, in a public sphere.
This approach means that we can’t dictate method, because we’re telling the students in a very strong way that their method has to derive in some quasi-logical way from their question. And we can’t dictate question because… well… we’re telling students that research is about straddling that strange space between personal curiosity and public accountability – and it’s a bit out of place to tell other people what they ought to be curious about… ;-P
So we bookended this first lecture with two videos, designed to mark out two possible extremes in conceptions of social research. After some brief transitional comments, we opened with the first six minutes of this video of the Milgram experiments:
What the students saw was a man in a white lab coat take an authoritative role in a highly artificial experimental setting, where the stated purpose of the exercise was to test a hypothesis in carefully controlled circumstances. I did warn the students there was more to this experiment than met the eye (we’ll return to this video again later in the course) – but the parting image they were left with was of what looked to be a research subject with a heart condition, strapped to a chair, awaiting progressively nastier electric shocks if he failed in a memorisation task… (They laughed… Hmmm… I responded by telling them we would trial this method in their tutorials…)
So this is one extreme – not, in this case, for the distressing nature of the experiment, but for the highly artificial, controlled, hypothesis-testing orientation of the study. The video with which we ended the session was this one, on Sudhir Venkatesh’s ethnographic work on a Chicago gang (the embedded video below is only an excerpt – the full program is here):
Venkatesh’s piece was chosen for a sort of maximal contrast to the fragment of the Milgram video that we showed: a research scenario in which the field strikes back, takes its researcher captive in the most literal possible sense, rejects the researcher’s “expert” knowledge, and tells the researcher how to conduct the (radically uncontrolled) study.
We will do other things with these and other video materials through the course but, for purposes of this introductory lecture, the point was simply to mark out two extremes, to suggest that there is a continuum of possibilities between them – and to suggest that all of this, the whole continuum, could be defended as some form of “social research”: a continuum along which the students would have some opportunity to begin situating themselves over the term.
In terms of other content, this blurb from the course guide gives the gist of how we are approaching the course:
Many people, when they think about research, think of something done in a special sort of place, like a laboratory, a library, or a “field site”, by a special sort of person, like an academic expert who has spent years acquiring a vast specialist knowledge of what they are studying, and on a special sort of topic, which is important enough to count as a “research question”. Thought of this way, research can seem a bit intimidating and removed from our other concerns: we can struggle to think of ourselves as being the kind of people who might do research – surely we aren’t qualified or we don’t “know enough”? We can struggle to imagine what research might look like, if carried out in the sorts of settings where we spend our personal and professional time – surely research doesn’t tackle the sorts of experiences we have in our everyday lives? We can doubt whether our questions and concerns are “important” enough to count as research questions – surely research investigates something more removed from our everyday experiences or personal passions?
While it is very common to think of research as this kind of specialised, rarefied expert activity, this image of research is highly misleading. Research, at its most basic, involves cultivating the very opposite of expertise: it entails a process of opening ourselves to what we don’t know – of taking seriously our own curiosity and desire to learn more – of asking questions. Because research crystallises around a question, the research process is driven precisely by our lack of expertise – by what we need to learn. In the research process, we all position ourselves as explorers and investigators, rather than as people who already possess some kind of mastery over our subject matter. Because research operates in this space of exploration and uncertainty – because it takes the form of a quest to learn something new – it is impossible to have all the skills and knowledge you will need for the research process, before you undertake the research itself.
While some of us may have a bit more practical experience with research than others, all of us have some experience with the core skills required for the research process: we have all been curious, asked questions, set about finding answers, and debated with other people about each step in this process. On one level, then, we are all “researchers” in at least an informal sense. At the same time, no specific research project – formal or informal – begins with a special creature called a “researcher” who already possesses all the skills and knowledge required to carry out a research project, before they start asking questions and working out how to answer them. Researchers are created, not born. And what creates them is nothing more than the process of actually doing research. You become a researcher: you do this by carrying out research. All the skills that research requires, and all the things you need to know to do research successfully, are learned through the research process itself.
This doesn’t mean that formal study is not essential to the research process: it is. It means that this formal study more closely resembles the process of apprenticing in a craft, than it does the process of committing to memory some fixed body of information. Research is a practical activity – an art, albeit one undertaken with a scientific spirit. Every question, every method, every researcher brings something subtly different to the research process – meaning that research is never learned abstractly, as a skill that could be pursued separately from its various practical applications. Instead, your research question is what drives your formal study, providing a meaningful context within which you can work out what sorts of formal knowledge and skills you need to have, why you need to have them, and how you can learn them most efficiently. Your research question therefore grounds other sorts of study you undertake – which is why we will start this course, not with an abstract set of knowledge or skills we think you need to memorise, but with activities that will help you work out a research question that can organise the rest of your work in this course.
From this starting point, we will then guide you as you undertake a quick apprenticeship in the major stages of the research process. There are of course many different types of research. The research carried out by a journalist, an activist, a market researcher, a government, or an academic researcher will differ in significant ways, for example, due to the different end goals and audiences for the research. Nevertheless, certain elements are common to any sort of research process. Those common elements will provide the focus of our work in this course.
The main interest in life and work is to become someone else that you were not in the beginning. If you knew when you began a book what you would say at the end, do you think that you would have the courage to write it? What is true for writing and for a love relationship is true also for life. The game is worthwhile insofar as we don’t know what will be the end.
~ Michel Foucault
(1982) “Truth, Power, Self: An Interview”, in L.H. Martin et al. (eds) (1988) Technologies of the Self: A Seminar with Michel Foucault, London: Tavistock, pp. 9-15.
What is this strange thing about writing that requires courage? Where is the risk? Why is this task so fraught?
“It’s the problem with reading so many primary sources,” L Magee suggests the other day, when we discuss this issue, “You think you have to be that good.”
I mention that I am relatively good with situational pieces – the context is known, and bounded. It’s developing the boundaries that is difficult for me – deciding when it’s okay to stop. LM shares this worry: “I say to myself, how can I possibly write on this, when I haven’t read…” I wince, as LM manages to list some works I also don’t know – I feel the boundaries pushing farther back. Involuntarily, I remember ZaPaper discussing how research is fractal: no matter how much you drill down, things never seem to become less complex – if you don’t rein things in, ZaPaper argues, “One ends up investigating everything and writing nothing”.
In my conversation with LM, I change the topic quickly to get my mind off of all the works we have convinced one another we must read (I’m actually embarrassed to list the things LM and I are planning to read together this term – embarrassed because it’s simply absurd, the number of works – the number of fields – we are frantically trying to cover, in our quest to feel vaguely adequate to the problems we are posing. I’m reminded of Scott Eric Kaufman’s search for complete world knowledge – I think that’s a fairly good description of what we’re telling one another we’ll manage to cover in the next six months…).
I offer that I do better when I have a specific audience in mind, when I have some idea what concepts are shared, and what concepts need to be developed and explained in detail. “Write for me, then,” LM volunteers, “Let me be your audience – then you’ll know to keep things simple, break things down.”
LM is being modest – as if I haven’t received the most thorough criticism of my work in our conversations – I hardly need to keep things simple in our discussions.
The issue, though, isn’t really audience, or situation – or even background – these are all deflections from the core challenge, which concerns the question or problem. Writing begins in earnest for me when I’ve decided what the core problem will be. Knowing the audience or the situation makes this easier, because the universe of possible problems that interest me can be narrowed to the much smaller set of problems that jointly interest me and specific interlocutors, or that intersect with some specific situation. But the core issue is still defining the problem.
At the moment, I’m balancing across a few core problems, and have been writing at a level of abstraction high enough that I could keep all of these problems suspended at once. This was useful, very useful, for a period. But now I need to move back to something ever so slightly more concrete (realising that this term only ever applies in a slightly ironic way to my work), which will force me to leave some of these problems to the side for a time. As a step in this direction, over the next couple of months LM and I will be working on a proto-collaborative project from time to time, starting with a set of reflections on The Positivist Dispute in German Sociology, and tentatively organised around the question “Is There a Logic of the Social Sciences?”
Ironically, this topic picks up on the very earliest theoretical question I addressed on the blog: whether it is viable or productive to seek to understand the emergence of the social sciences, and the relationship between the social and the natural sciences, with reference to some kind of strong ontological distinction between forms of human practice, or the properties of social and natural worlds as objects of knowledge. When I first addressed this issue here, I contested the validity of this kind of theoretical move, but left open (as an exercise for the writer… ;-P) the question of what a developed alternative might look like. We’ll see whether this collaborative dialogue allows me to pick up on some of these issues in a more adequate way – and how the question comes to be refracted when translated into a more interactive exchange.
I should note by way of apology that I pulled an unintentional bait-and-switch to get LM on board with this vision of a collaborative project. We’ve been talking about doing some form of collaborative writing for some time, but have both been too busy to undertake anything more involved than what we’ve attempted from time to time on the blog. Now that our schedules are lightening a bit, we returned to the issue of collaborative writing with a more serious intent. I suggested an upcoming (low-key) conference, LM suggested something around The Positivist Dispute, and I proposed that perhaps we could look into the competing meanings of “the critical tradition”, as this concept was central to this debate. All well and good, and so we shared dinner and a nice conversation around what we might write, and then, just when all seemed settled and we were wandering into the subway station to go home, I was suddenly hit with the concept and burst out, “You know of course what we could do instead? We could also look at the whole notion of the logic of the social sciences – maybe title the presentation ‘Is There a Logic of the Social Sciences?’”
LM blanched, and reminded me that I had recently been lamenting that, when I present, people tell me I am… er… scary: did I really think, LM wanted to know, that presenting on this particular question would assist me in overcoming that perception? I found myself rationalising – oh, it won’t be that big of a deal – no one will show for the presentation, really, because the topic is just too abstruse – if people do show, it’ll just seem like a discussion of a dead debate, etc. LM seemed sceptical, and began to list people who would be likely to attend. I suspect I’m too tempted by the topic, by the problem, to let other concerns get in the way… This reaction no doubt has something to do with what tends to happen when I present… So here we are – at least for the moment – having decided to open a discussion on the blog, and then see what develops from here that we might (or might not) turn into a presentation in a couple of months.
Note that we haven’t settled on any particular order or schedule for posts. I’ll try to write something over the weekend to get things started – most likely focussing solely on Popper and Adorno’s original contributions to the debate, and exploring how the competing notions of critique yield different concepts of the social sciences. We don’t have any specific plans for what will fall out of this discussion – whether it might yield some kind of joint presentation, duelling presentations from competing stances, or a decision that the topic isn’t productive for what we each want to write at the moment – these decisions will emerge over time. Hopefully we’ll both find it productive for our current writing, not knowing how all of this will end…
So I haven’t written much substantive lately – and this post unfortunately won’t break that trend. ;-P Prosaic work responsibilities are bearing down on me and, for at least the next several weeks, I simply won’t have time to dig in to serious questions. Which is frustrating, because I feel at the moment like I’m absolutely seething with ideas that are searching for expression and form. And writing – structured, sustained, in-depth writing, rather than the sorts of scattershot sketches I can dash off in between other things – is the only way I know to show myself what I’m thinking – to discover what force, if any, these still-inchoate ideas might possess…
L Magee and I have had occasional conversations this past year on the ways in which the interdisciplinary transmission of ideas takes place. One recurrent theme in these conversations has been the issue of time lag – how concepts and works from outside one’s core discipline or sub-discipline are so often appropriated in the form they occupied decades ago, with little appreciation for how subsequent specialist discussion might have transformed a tradition – whether enabling a tradition to address pivotal early critiques, or causing a tradition to be rejected in spite of its early promise. Another recurrent theme has been the issue of marginality – how texts and concepts can sometimes come to have interdisciplinary resonance, and even – in the minds of non-specialists – come to signify a discipline, when that discipline’s own practitioners might regard those texts or concepts as dubious, marginal, dated, or mundane statements of the obvious.
The fact that a disciplinary discussion “moves on” – that specialists are no longer so taken (or may never have been taken) by specific works as are those of us looking into the discipline from the outside – is not automatically grounds for rejecting an interdisciplinary appropriation. It may in fact be that a work is simply more valuable for the thoughts it sparks outside its home ground, that specialists have become jaded through familiarity, that the influence of a foundational work has come to be so taken for granted that its novelty and importance are no longer recognised within its own field – or that, as Sinthome has suggested, pressures driving toward novelty in academic production have created a critical cottage industry that, for all its volume and detail, takes nothing away from the overarching brilliance of an earlier text.
Being unaware of these broader specialist debates becomes more of a problem for interdisciplinary work, however, when people succumb to the temptation, not only to be inspired by a work from another discipline, but to steal some of the aura of that discipline to add a kind of nonconceptual force to their re-presentation of a borrowed idea. LM and I have recently been discussing some examples of this in relation to social science appropriations of quantum mechanics and set theory in particular, where occasional authors have quite selectively appropriated very specific interpretations of highly contested issues within a complex specialist discussion, and presented these appropriations to nonspecialists as “discoveries” – as established and firm bits of factual knowledge or analytical technique. These kinds of “auratic” interdisciplinary appropriations often strike me as attempts to raise the prestige of a claim by exoticising it, removing it from the everyday experience of intended readers and interlocutors, and effectively placing the claim within a black box of inherited authority, in which position it is shielded from critique…
As someone quite committed to interdisciplinary work, I always find myself a bit frightened by the risk of “auratic” appropriations: I don’t think such appropriations are always intentional, or are consistently recognised for what they are, and I want very much to avoid falling into this practice. This is why I so often emphasise the metaphoric nature of concepts I appropriate from other fields, and try to remain tentative and agnostic about extrapolating the significance of empirical work from distantly-related disciplines, assuming that, as in those more familiar disciplines closer to home, exotic fields will also have their intractable debates, their unaccountable fads, and their creative interpretive frameworks that are massively underdetermined by the evidence… Like any tourist, the interdisciplinary researcher needs to take special care not to overlook potential dangers whose existence would loom large to a disciplinary native… At the same time, interdisciplinary travels are the only way that certain kinds of questions can be answered – often, in fact, the only way that certain kinds of questions can be perceived. Fear of what might go wrong therefore must not undermine our willingness to undertake interdisciplinary work. The question becomes, not whether to conduct interdisciplinary work, but how to do so at a high level.
All of this is a very long prolegomenon to mentioning that I am currently reading Manuel DeLanda’s A Thousand Years of Nonlinear History – which Russ suggested to me some time back, and which I really ought to have read long ago, given that it is an attempt, like my own work, to reason through the philosophical implications of historical experience within a materialist framework. DeLanda’s materialism is of the expansive form associated with the Annales School – seeking to embed human history within a much broader and subtler field of material life than most other “materialist” approaches. DeLanda draws on a very wide range of scientific and social scientific disciplines – mined particularly, I gather, for their insights into potentials for spontaneous self-organisation and “emergence” – as inspiration for his philosophical work, which attempts to understand the implications of complex and nonlinear trajectories he regards as characteristic of material systems and of human history.
I’m too early in the text to comment meaningfully, but am fascinated by the ambition and scope of the work – and am also enjoying reading an author who attempts to dig deeply into the relationships between philosophical concepts and historical experience. I am also particularly interested in how the work navigates the interdisciplinary minefield I mentioned above – how it might draw inspiration, while avoiding the risk of aura, when the disciplinary appropriations are themselves so multi-faceted, and the object of analysis so complex and vast. I’m eager to dig into the details… If others who have read DeLanda would like to comment, I’d also be interested in learning what different folks have taken away from DeLanda’s work.
Probably the worst time of the year to post a bleg, but hopefully some folks might still see this when they trickle back from the holidays…
I’m interested in tracking down some useful articles or books on the history of the concept of bias in research methodology (or of related concepts such as the principle of observer neutrality as a normative ideal for research, etc.). I’m particularly interested in works that might track the initial articulations, spread and development of concepts related to the notion that, in order for research results to be robust, the research process must remove subjective and social influences on research outcomes.
I’ve had a sudden realisation – perhaps inspired by the Hamming article – that this information might be particularly useful for some of the problems I’ve been circling around… ;-P
I’ve mentioned previously that I’ve found myself reading much more draft student work this term than I normally do. While this has been a somewhat sudden development, the work involved is continuous with work I’ve done in other academic contexts – I don’t think I’m anyone’s notion of a master of English prose, but I have done a lot of thinking and teaching on academic writing, and believe I can provide at least passable assistance to most students who are struggling with the genre.
What has been more surprising this term has been the number of requests I’ve been receiving for consultations on research methodology. I realise it sounds a bit odd to be surprised by this, given that I’m teaching a research methods course. And I do love teaching into this course – it’s my favourite “subject” to teach, specifically because I enjoy the process of workshopping the logical connections between students’ broad interests, their narrower research questions and their methodologies. It’s one of the most creative teaching processes I currently engage in – an intrinsically unpredictable, decentred, energising form of teaching practice that would be very difficult to replicate in other contexts.
Still, before being invited to teach the course, I had never previously thought of myself as any kind of methods “expert”. Having taught the course for a year now… I still don’t… And yet here I am, sketching on scraps of paper and whiteboards, trying to help people map out connections between intellectual interests, research questions and methodologies… And, since I like the work and want to continue doing it, I’m engaged in a process of trying to increase my skills so that they begin to seem somewhat proportionate to the faith people are placing in them… Problem is, I’m not sure that all of this effort is getting me any closer to any kind of methodological expertise – instead, I mainly seem to find myself refining ways of communicating some fairly straightforward dimensions of academic practice, such as (in no particular order):
The discussion at Acephalous revolves, among other things, around the question of the degree to which a mistake like this should be considered a “Freudian slip” – that is, a slip of the tongue that signifies something meaningful about the speaker – in this case, latent racism.
Several complex issues run through this kind of debate for me. The first is the empirical status of Freudian theory – the question of how difficult it is for any interpretive theory (not just psychoanalysis) to extricate itself from problems of confirmation bias – of examining only those slips of the tongue, for example, that produce meaningful words that are potentially subject to interpretation, while overlooking the various stutterings and mis-steps that don’t appear to produce meaning. The second is the contested issue of whether psychoanalytic approaches have taken seriously the question of what evidence would be required to falsify or force a rethink of core concepts within the theory.
Yet these sorts of empirical questions, which have entered into other discussions of psychoanalytic theory at Acephalous in the past, were not really the core issue at stake in this particular debate. Rather, the major issue seemed to be the way in which the folk appropriation of psychoanalytic theory so often leads to something like a notion of “unconscious intentionality” – so that, once you believe, for example, that this slip of the tongue must be meaningful, and then conclude that the slip must signify a transgressive desire like unconscious racism, you then also judge the person for these unconscious impulses, as if the conscious mind must somehow have been complicit all along, for such unsavoury unconscious impulses to exist.
I tend to think of this issue by analogy with work I do on social structuration. I am interested in broad, pervasive patterns of historical change – in forms of perception, thought and practice that tend to span geographical regions, disciplinary boundaries, and fields of practical activity.
One common way of explaining the existence of patterns of historical change is to invoke a kind of conspiracy theory: to say, in effect, that “natural” or “unconscious” change ought to be random in character, so the existence of a meaningful pattern implies intentionality. Meaningful historical patterns then come to be taken as evidence that, somewhere in the background, some group of persons must be making conscious, deliberate choices to cause the world to become as it is. This mode of reasoning in the social sciences is of course analogous to the concept of Intelligent Design in the natural sciences – both approaches assume that complex patterns cannot arise in the absence of intention. Where Intelligent Design is marginalised in the natural sciences, however, variants of conspiracy theory can often be quite central to some social scientific traditions, in explicit or tacit forms.
I favour an alternative, which focusses on historical patterns as the unintentional consequences of actions that, even if they are consciously undertaken, are intended to produce very different results from those they actually effect. The interesting historical problem then becomes understanding why a non-random pattern should arise, if no one consciously intends to bring that pattern into being.
When examining the social realm, once we conclude that patterns are likely generated without conscious intent, it is fairly clear that there is no “place” where these unconscious social processes reside, other than in the myriad actions of the individuals who inadvertently reproduce such patterns. When we look at nonconscious patterns that arise from the human mind, we are less sure – and, perhaps as a result, retroject notions of intentionality that could only ever be appropriately applied to conscious behaviour, into a nonconscious realm to which they don’t apply.
Ironically, I don’t see Freud as having this particular problem – I think he was quite clear, in his descriptions of the unruly, contradictory, fragmented id, that the logic of the conscious realm should not be applied to nonconscious actions – and, in fact, argued that much suffering resulted precisely from guilt inappropriately experienced in relation to unconscious impulses. It is an interesting question whether, in still maintaining that unconscious impulses could be interpreted – that unconscious behaviours have meaning – Freud might inadvertently have slipped a bit of the logic of the conscious world back into his analysis of the unconscious. But I won’t make any strong claims on this issue without thinking it through far more thoroughly than I have here…
Regardless, in percolating through popular culture, psychoanalytic concepts have retained the Freudian notion that unconscious desires are meaningful – but taken the unconscious as the cipher for the “true” person, such that inadvertent and unintentional acts are taken to be more fundamental, in some ways, than acts that are consciously chosen. In this respect, folk psychoanalytic categories join up with a phenomenon I blogged about a couple of weeks ago: the tendency, within the liberal economic and political tradition, to regard order that arises spontaneously as more “natural” than order that arises from conscious planning. This suspicion of consciousness is apparently an interesting red thread uniting many otherwise contradictory philosophies…
I’m not sure where this leaves me in terms of the issues discussed in the Acephalous thread. It does, though, sound a precautionary note on the need for theory (social and psychological) to take seriously both the reality of conscious intentions and the potential for non-conscious patterns, rather than reducing one of these phenomena to the level of appearance, in some sort of essence-appearance dichotomy.
One of my friends from college spent a frustrated semester constantly arguing with a classmate. Each time my friend seemed on the cusp of argumentative success, his opponent would pull out the same relativist conversation stopper: “Well, you know, there are millions of different ways of viewing every problem”. And so would end the debate.
My friend’s frustration grew and grew, until finally one day he burst out: “Yes! There are millions of different ways of viewing every problem – and some of them are WRONG!”
I was reminded of this story when the students in my Research Strategies course were discussing the ethics and politics of their research this evening. The concept of “bias” seemed to function as some sort of conversational attractor – no matter which direction we set out, we always seemed to end up circling around it.
The concept of bias often carries in its wake a tacit assumption that the ideal researcher would be a fully disengaged and impassive observer. I don’t believe such a researcher exists – and neither do my students, of course. The question is whether the disengaged observer is still a useful ideal type – a sort of Habermasian ideal that no one will ever reach, but that is still useful because it provides a standard against which we can criticise existing practices – or whether there is some alternative critical standard that does not require us to resort to a concept of disengaged research that will never correspond to social science practice.
My impulse is that we need critical standards that – while high and demanding – do suggest a form of social science that someone might actually practice, at least when functioning at their best. Social scientists in practice cannot be disengaged because, among other reasons, they are their own primary research tool – their ability to empathise and recreate within themselves a sense of the motives and the reasoning and the emotions of fellow human beings, their social acumen and insight, is an intrinsic dimension of social scientific research. Using the concept of the disengaged researcher as a critical ideal therefore stands in deep and fundamental tension with the practical requirements of social scientific research.
Using the concept of a more fully and completely engaged researcher, however, does not – and I suspect this is the direction we need to be reaching, to develop a clearer and more useful understanding of ideal social science practice. More fully engaged research would reflect on the potentials and insights that are historically available to us in a given moment, and would explore whether the research process reflects the highest ideals available to us at the time. It would therefore make use of the types of empathy and social insight required in social science research, rather than sitting in tension with social science practice.
This leaves open the question of how, in this embedded and historicised view of the world, you validly decide among the “millions of different ways of viewing every problem” to pick the ways that are “right” – that represent the highest potentials of your historical moment, and therefore provide you with the ability to justify claims that other views or practices should be considered “wrong”. I’m currently finishing an (overlong) piece on Adorno that explores this issue – once I’ve cut that piece down to manageable size, I may post some fragments on the blog.
I wanted to conclude this series of posts on Bent Flyvbjerg’s work with a brief discussion of his analysis of Habermas and Foucault. To many social theorists with a critical orientation, Foucault and Habermas appear to represent the key theoretical paths available to social critique. It is therefore common for a theorist to choose either Habermas or Foucault, with Habermasian theorists insisting that Foucault lapses into nihilism, and Foucaultian theorists asserting that Habermas advocates an oppressive consensus that leaves no room for difference. Flyvbjerg falls on the Foucaultian side of this theoretical divide, and I will suggest below that this choice causes him to miss some of Habermas’ core strengths and understate some of Foucault’s core weaknesses. A less partisan approach to both theorists, I suggest, might lead beyond a simple choice between the two, and onto a more adequate conception of critical theory.
Flyvbjerg criticises Habermas for seeking universal normative standards as the basis for his critical social theory. Flyvbjerg cites Habermas’ concept of the ideal speech situation – Habermas’ contention that, as beings who engage in communicative practice, all humans universally and necessarily understand the potential for the development of uncoerced consensus achieved by free and equal participants engaged in rational communication.
Flyvbjerg objects to this concept on two levels: he argues, first, that Habermas’ ideal speech situation can never be fully realised in practice – that power is always already present in any communication – and that Habermas’ approach therefore necessarily involves a gap between “is” and “ought”, between ideal and practice; he argues, next, that the actual realisation of the ideal speech situation – with its aim of universal consensus – would necessarily be oppressive in that it would suppress the inherent difference and diversity that always characterises all human communities. Flyvbjerg goes on to claim that Habermas’ approach involves a completely uncritical appropriation of modernity, while it leaves Habermas blind to the reality of power relations in contemporary society.
These are very common criticisms of Habermas from a Foucaultian approach, and yet they represent fundamental misunderstandings of the strategic intent of Habermas’ theoretical claims. By exploring that strategic intent a little more closely, it should be possible to assess Habermas’ work in a more balanced light, appreciating his insights, as well as developing a more targeted critique of the weaknesses of his approach.
Contrary to Flyvbjerg’s assertions, Habermas’ theory is not weakened by the observation that an ideal speech situation can never be realised in social practice, nor is it challenged by the observation that power relations will always exist in any human interaction. Similarly, Habermas does not seek to achieve universal consensus as some kind of prescriptive social ideal. Critiques based on the notion that universal consensus would be oppressive are therefore somewhat beside the point.
Instead, the strategic intent of Habermasian concepts such as that of an ideal speech situation, or of different action orientations that social actors can assume toward one another in a speech situation, is to demonstrate that all humans have access to critical forms of perception and thought, which they can then direct against the power relations embodied in existing social institutions, practices, and ideologies. The important thing for Habermas is not whether we can attain an ideal speech situation in our social practice: it is whether, as social actors, we can conceptualise what an ideal speech situation would be, if one could exist – whether we have been exposed to some form of perception and thought that introduces us to concepts of freedom, equality, absence of coercion, intersubjective agreement, and other normative standards Habermas brings to bear in his social critique.
Habermas’ intent is explicitly counter-factual: he believes that, if he can demonstrate that we have access to these critical forms of perception and thought, he can then account for the possibility that a social critique of an existing social institution might emerge – that people might declare that a particular social institution is, in fact, riddled with objectionable power relations – while still remaining within the boundaries of a secular, materialist social scientific analysis that does not appeal to religious sensibilities.
From this perspective, when Flyvbjerg dismissively criticises Habermas by invoking Rorty’s claim that the “‘cash value’ of Habermas’ notions of discourse ethics and communicative rationality consists of the familiar political freedoms of modern pluralist democracy” (p. 98), Flyvbjerg demonstrates how poorly he grasps Habermas’ strategic intent. For this is precisely what Habermas sets out to do: to explain, in secular terms, how these “familiar political freedoms” have come to feel so familiar – often in spite of their flagrant contradiction to the practical power relations we experience in our everyday social life. Regardless of how we evaluate Habermas’ attempt to account for the historical emergence of these values – and I am very critical of Habermas’ account – the key question Habermas raises must somehow be addressed by any critical social theory that seeks to be consistent: to explain the possibility for the emergence of critical sensibilities, just as it also explains the possibility for the emergence of power relations in contemporary society.
Where Habermas can be validly criticised, I would argue, is over his failure to achieve this goal without appealing to fundamentally asocial mechanisms for inculcating critical forms of perception and thought. For, although Habermas avoids religious or metaphysical foundations for critique, and thereby remains in the purview of “materialism” in the broadest sense, he does not truly provide an account of the emergence of fully historical and socialised critical forms of perception and thought. Instead, he offers an account of how potentials that were “always already” embedded in the logic of communication – in human speech acts as such – were historically realised under particular social conditions. Having been realised, however – and this is crucial for the resistance-oriented character of Habermas’ theory – these critical potentials can never completely be extinguished. Instead, the critical potentials embedded in the fundamental logic of human communication stand, in Habermas’ account, somehow outside of the ebb and flow of society and history – like Kantian a prioris, categories in terms of which humanity judges historically specific forms of domination and abuses of power, but not categories that are formed completely in and through a particular historical form of social life.
Flyvbjerg of course also criticises Habermas for his lack of historical specificity and, in light of his similar critique, the distinction I am drawing here may seem pedantic. The “payoff”, however, can be seen when examining Flyvbjerg’s uncritical appropriation of Foucault.
Flyvbjerg appropriates Foucault as a model for an analysis of power relations, and for an understanding of the relationship between power relations and forms of knowledge. He approves of Foucault’s consistently historical genealogical method, and cites Foucault’s meta-theoretical statements to prove that Foucault does not regard himself as somehow outside or above the history and the power relations he analyses, but rather as operating on the same historically and socially specified plane of existence. Flyvbjerg therefore rejects the Habermasian critique that Foucault is relativistic – arguing in reply that Foucault has never believed that “anything goes” (p. 99), nor advocated “value freedom” à la Weber (p. 126). He finally cites Foucault’s belief that thought provides freedom for critical forms of perception and thought, arguing (p. 127):
For Foucault, “[t]hought is freedom in relation to what one does”. Thought is not what inhabits a certain conduct and gives it meaning. Thought is, rather, what allows one to step back from this conduct and to “question its meaning, its conditions, and its goals”. Thus thought is the motion by which one detaches oneself from what one does and “reflects on it as a problem”. Thought is the ability to think differently in order to act differently. Thought defined in this manner – as reflexive thought aimed at action – stands at the core of Foucault’s ethics, which, then, is an ethics antithetical to any type of “thought-police”. Reflexive thought is therefore the most important “intellectual virtue” for Foucault, just as for Aristotle it is phronesis.
In this account, where Habermas is at least seeking to account for the forms of perception and thought that appear to underlie the democratic institutions and ideals of modernity, Foucault appears to be postulating a generic human capacity for critical thought, as such. If Habermas’ approach falls short of a fully historical critical theory by grounding critical forms of perception and thought in specific attributes of human communication, how much shorter must Foucault’s approach fall, when it appears to ground critical forms of subjectivity in the completely decontextualised and oddly Cartesian move: I think, therefore I critique.
Both approaches – contrary to the assertions of advocates on either side – fail to take seriously the possibility that, just as we can analyse specific types of domination by embedding them in their historical context, so might we also be able to analyse our specific normative standards – those forms of perception and thought that enable us to perceive power relations as dominating in a specific way, as abrogations of a particular type of potential freedom – by embedding those in their historical context.
From this perspective, Habermas at least recognises the need to account for his own critical sensibilities, even if he fears relativism too much to account for these sensibilities in fully historical and social terms. And Foucault at least recognises the potential for a fully social and historical form of knowledge, even if he does not fully understand the need to account for the emergence of critical sensibilities.
Yet these two halves do not quite combine to make a theoretical whole: for that, I would argue, we need a fully historical critical theory, one which would provide a consistently historical account of how our shared form of social life can generate specific forms of domination, together with the potential for particular kinds of freedom. It is through an exploration of this alternative vision of critical theory, I would suggest, that we will come closest to realising Flyvbjerg’s goal of achieving a future that points beyond the domination of objectivist and instrumental rationality, and toward the realisation of a shared social life governed by a more substantive form of reason.