After a few months back on the conference speaking/attendance circuit, I’ve had something of a refresher course in the joys of academic meetings and decided it was time to write up the range of feelings — from irritation to rage — that have been stirred up as a result. I’m not going to name names in this piece, because in nearly every case the absence of value at the conference had little or nothing to do with the organisers and everything to do with the speakers and the audience.
So, I’ve never been to a TED Talk, but they look like quite lively affairs — lots of noise, lots of visuals, and a lot of wild applause at the end. Of course, TED speakers are also good speakers: charismatic, dynamic and, most importantly, engaged with the audience. It is therefore safe to say that you will not see someone at TED serially mumbling the text of their PowerPoint slides while avoiding any kind of eye contact with the audience.
Given that every TED talk ever is available online, I would like to ask: why are we academics still so bad at all this? I suspect that specialisation is a key issue: many academic speakers spend so much time talking to their vanishingly small peer group that they manage to forget that the most basic thing about their research — the point — is not always obvious to the uninitiated. So rather than tell the rest of us why we should care about their findings, they launch into the minutiae of their methodology, into fifteen slides of figures whose contribution to the results is far from clear, and into such narrowly defined findings (if any!) that I seriously consider billing them for wasting my time.
It seems to me that there’s one simple way to avoid this unpleasantness: the majority of slides must be able to satisfactorily answer the ‘so what?’ question. Most of the audience cares about two things: 1) what we found; and 2) how it might be relevant to their work. All that methodology? All those equations? The slides on data cleansing? Sorry, but if anyone actually cares about our work then they’ll probably ask us after the session or search for our papers online.
A passable talk isn’t that hard if we keep asking ‘so what?’. 1-3 slides (and no bloody more) of context will help the audience to figure out why you’re doing your work (‘why should I care about this talk?’). 2-4 methodological slides are *more than* enough to figure out how you’ve done it (‘why should I think this work is robust/valid?’). The simple truth is that people who are already specialists in a field don’t need to see all the formulae since they already know the details. And the people who aren’t in the field won’t be able to make sense of the details anyway, so you’re just wasting time.
Right then, 5 or 6 slides and we’re on to presenting actual results! 4 or 5 slides about the exciting things you’ve found and 2 more putting them in context (‘why should I remember all of this?’). And then a quick, graceful leap on to the concluding slide, in which a figure, a sentence or two, or possibly a bullet-point summary reminds the audience of the key ‘takeaways’.
Unfortunately, sometimes it’s not the presentation, it’s the research itself that can’t clear the ‘so what?’ hurdle. Right now there’s a real fetishisation of ‘real-time’, as if we can tack ‘real-time’ onto a research programme to make it meaningful. First off, does the researcher mean hard real-time or soft real-time? They’re not the same and the difference is very, very important. Hard real-time is your airplane. Hard real-time is the Shinkansen being electronically tethered to seismographs able to trigger an automatic — and immediate — shutdown of Japan’s HSR system when the markers of an earthquake are detected.
But seriously, do I need real-time data from my effin’ fridge? Call me crazy, but I suspect that its status is largely unchanged since I grabbed this morning’s lunch… maybe the cheese is a little softer and a little riper but that’s about it. I don’t even need soft real-time from a ‘smart fridge’, I just need it to tell me that, as of the last time it was opened, it contained a half-dozen eggs, some butter, and the rest of the makings of a tasty omelette.
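To make that distinction concrete, here’s a minimal sketch in Python. Everything in it — the function names, the thresholds, the fridge contents — is my own invention for illustration; the point is just that missing a hard real-time deadline is a system failure demanding immediate action, while ‘stale’ soft real-time data is merely a little less useful.

```python
# Hypothetical illustration of hard vs soft real-time; all thresholds invented.
HARD_DEADLINE_S = 0.010    # hard real-time: a missed deadline is a system failure
SOFT_STALENESS_S = 3600.0  # soft real-time: late data is merely less useful

def hard_real_time_check(sensor_age_s):
    """Shinkansen-style logic: if the seismic reading is late, assume the
    worst and trigger an immediate shutdown."""
    if sensor_age_s > HARD_DEADLINE_S:
        return "EMERGENCY_STOP"
    return "CONTINUE"

def fridge_inventory(seconds_since_opened, contents):
    """'Smart fridge' logic: a snapshot from the last door-opening is
    plenty; staleness is noted, not acted upon."""
    note = " (stale)" if seconds_since_opened > SOFT_STALENESS_S else ""
    return "As of last opening: " + ", ".join(contents) + note
```

One system must react within milliseconds or people die; the other can cheerfully report hours-old data and still do its job.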
Throwing ‘real-time’ into the ‘smart fridge’ is a clear case of focussing on the wrong problem (the actual problem being simply finding out what’s in the fridge). I recently sat through a talk on using real-time taxi data to do traffic light sensing. The speakers (who I would like to note clearly can do worthwhile research, because I found some of it online afterwards) had amassed months’ worth of GPS data on thousands of taxis and demonstrated… that most taxi drivers in this city stop at red lights. But they had an acronym for it so it must be interesting! It must be science!
In fact, they even had two (two!) 3D bar graphs complete with probability distribution functions done in MATLAB to prove that proximity to a red light was positively correlated with the car being stopped. With the best will in the world I can still only call this “stating the bleedin’ obvious”! The authors went on to propose a second (real-time, natch) application of their GPS analysis: it could help city managers know the state of their traffic lights. So, the proposed output from hundreds of hours’ worth of data collection and processing was to give the city a partial, fragmented view of information that it already has in a complete, real-time format from its own traffic control centre.
This example (for which I apologise to those I’ve singled out) serves to highlight one of the more egregious and basic failures of which many of us across the full range of ‘tech-focussed’ research are guilty: requirements. Yes, it’s a dirty word. A commercial word even. Academic research is, of course, supposed to be ‘blue sky’ (gag) but that doesn’t mean that it doesn’t need requirements. If you’re going to propose that you have solved a problem, then you might find it informative to talk to the people who supposedly have this problem first.
I was impressed recently by Columbia’s Sarah Williams: she and her colleagues had actually approached someone already involved in criminal justice research before ‘doing the visualisation’ that resulted in ‘Million Dollar Blocks’. This work is striking because it means something, not just as a nice view of some data, but as something that is actually connected to real, on-the-ground policy. The requirements (whether explicit or implicit) emerged through interaction with an ‘end user’ intimately involved in the development process. From what I can tell, the Spatial Information Design Lab’s approach to visualisation respects both its subject and its potential audience.
I only wish that your average academic conference speaker would do the same. There are nowhere near enough conferences where the majority of speakers respect their audience enough to start and finish on time. One session at a conference I attended ran 45 minutes over on a 1 hour time slot! A simple rule of thumb for speakers: you cannot have more slides than you have minutes in which to talk. I don’t care how quickly you *think* you’ll move through them: you won’t manage Φ slides in fewer than Φ minutes. You may make it through an average of one slide a minute if you’re lucky, but most of us aren’t.
Academic ‘superstars’ are often the most guilty of this. I know of one famous academic who, while walking over to the stage, asked their ‘escort’ “so what is this conference about again?” The speaker had already sent over a presentation (clearly without doing any work to adapt it for the audience) and apparently planned to ‘switch it up’ on the fly to make it ‘work’. Fail.
I also attended a well-designed conference that started with focussed work groups who reported back to the rest of the conference using the superstars as the communicators: these bright lights would summarise the findings of the group and add their bit of pizzazz to the proceedings. Except that some of the superstars flew in after the working groups had finished their work. And some of the others clearly didn’t think that the work groups had had anything interesting to say, even though they contained many smart, expert people. So instead of gaining insight into the consensus challenges, we got one person’s partial, and sometimes downright egotistical, view of the situation. A huge opportunity for real learning was lost.
I can’t help wondering if the bad behaviour of some speakers is a contributory factor in the increasingly bad behaviour of some audiences. Because I certainly can’t explain how else we’ve come to the point that I feel I have to actually write the following things down:
1. Don’t talk during the talk.
2. Don’t check your email during the talk if you’re sitting at the front of the audience.
3. Don’t turn up late and casually make your way to the front of the audience for a seat.
4. Don’t pass your laptop to someone else to show them something during the talk.
During one talk I gave recently, I was actually tempted to say to a presumably adult male (he had a white beard that suggested some degree of learnedness): “is there something you’d like to share with the class?” From where I was standing, the first four rows consisted entirely of people checking email, having discussions about emails, and sending inane FB messages.
I don’t have to infer this, because these people did the exact same thing during every other talk I sat through that morning. If you don’t want to be there, then don’t make life unpleasant for everyone else. Leave. Grown men (and it’s usually men) should know that we stopped taking attendance in high school. Or if you have to attend for some reason but don’t want to listen to the talks, then just sit your ass down at the back of the room so that the people who actually do want to be there can accomplish something useful.
These experiences have been so demoralising to me as a younger/youngish researcher that I seriously wonder if there’s much point in going to anything that isn’t either a) around the corner, so I’d go anyway; or b) on the other side of the planet, so I’d like to visit the country/city anyway. However, I actually think that there are three simple steps that conference organisers could take to make their conferences dramatically more useful:
1. Turn off WiFi. I know this won’t make me a popular man, but I’m casting my vote right now for unplugging WiFi at the start of each and every conference session. If having an internet connection is so important that the audience can’t live without it then they can go outside and deal with their email there.
2. Provide zero power inside the lecture halls. Although people these days often have laptops able to last through the session, at least they’ll think a little bit before running their WiFi and the agent-based model simultaneously. They might actually listen.
3. Enforce stated timings. Probably the easiest thing to do and, for some strange reason, the hardest to enforce. Speakers should be given a 2 minute warning (a discreet bell seems to be less distracting than a visual cue like a card reading ‘2 minutes’), a 30 second warning, and then they should be cut off politely and thanked. End of story.
So there you have it, tirade over. I’m sure that if you’ve stuck with me this far then you understand that this post stems from a desire to see conferences not be a waste of anyone’s time: yours, mine, or the organisers’. And at its heart, the problem is simply one of respect: for the speaker, for the audience, and for the organiser. So here’s to hoping that the next conference I attend has wild clapping at the end of a talk about using GPS data to analyse urban systems.