Thursday 19 July 2012

The bewildering bathroom challenge


The tap is a simple but genius piece of design. You turn a handle one way and water flows. You turn it the other way, and the water stops. The bathplug is even simpler. You find a pliable, waterproof substance and cut it to fit exactly into the hole out of which the water flows, and you equip it with a handle or chain on top that you can grasp to remove it.
Hotels the world over, however, are not satisfied with such simplicity. They conspire to make the task of producing water and containing it ever more baffling. I had wondered whether they were just more focused on appearance than function, but this website makes it clear that it's deliberate. It's worth quoting the blurb in full:
"A lot of attention in the design world is focused on creating products that are intuitive and easy to use, but sometimes a little ambiguity can be a good thing. Designed for use in restaurant and hotel bathrooms these taps embrace ambiguity to create a sense of intrigue to provide a more engaging interaction."
Hmm. Ambiguity is not really what I'm seeking in a bathroom. And engaging isn't the word I'd use for my interaction, as I try turning, pressing, pulling levers and dials and waving my hands around under taps. It usually ends in quite a bit of swearing. And that's in the cases where I can actually find something to push, pull or turn.
I wonder if there is some secret competition, known only to hoteliers, scored as follows:
  • 1 point for each room where a pool of water in the basin indicates they haven't mastered the plug
  • 2 points for each guest who gets a wet head when trying to turn on the bath tap
  • 3 points for each guest who has to get someone from reception to explain how to turn on the tap
  • 4 points for each guest who is too shy to ask reception and so doesn't wash during their stay
Hotels in the old Soviet Union had a simpler approach to frustrating their guests - they just didn't provide a bath plug.

Sunday 15 July 2012

The devaluation of low-cost psychological research

Psychology encompasses a wide range of subject areas, including social, clinical and developmental psychology, cognitive psychology and neuroscience. The costs of doing different types of psychology vary hugely. If you just want to see how people remember different types of material, for instance, or test children's understanding of numerosity, this can be done at very little cost. For most of the psychology I did as an undergraduate, data collection did not involve complex equipment, and data analysis was pretty straightforward - certainly well within the capabilities of a modern desktop computer. The main cost for a research proposal in this area would be for staff to do data collection and analysis. Neuroscience, however, is a different matter. Most kinds of brain imaging require not only expensive equipment, but also a building to house it and staff to maintain it, and all or part of these costs will be passed on to researchers. Furthermore, data analysis is usually highly technical and complex, and can take weeks, or even months, rather than hours. A project that involves neuroimaging will typically cost orders of magnitude more than other kinds of psychological research.
In academic research, money follows money. This is quite explicit in funding systems that reward an institution in proportion to their research income. This makes sense: an institution that is doing costly research needs funding to support the infrastructure for that research. The problem is that the money, rather than the research, can become the indicator of success. Hiring committees will scrutinise CVs for evidence of ability to bring in large grants. My guess is that, if choosing between one candidate with strong publications and modest grant income vs. another with less influential publications and large grant income, many would favour the latter. Universities, after all, have to survive in a tough financial climate, and so we are all exhorted to go after large grants to help shore up our institution's income. Some universities have even taken to firing people who don't bring in the expected income. This means that cheap, cost-effective research in traditional psychological areas will be devalued relative to more expensive neuroimaging.
I have no quarrel, in principle, with psychologists doing neuroimaging studies - some of my best friends are neuroimagers - and it is important that, if good science is to be done in this area, it is properly funded. I am uneasy, though, about an unintended consequence of the enthusiasm for neuroimaging: it has led to a devaluation of the other kinds of psychological research. I've been reading Thinking, Fast and Slow, by Daniel Kahneman, a psychologist who has the rare distinction of being a Nobel Laureate. He is just one example of a psychologist who has made major advances without using brain scanners. I couldn't help thinking that Kahneman would not fare well in the current academic climate, because his experiments were simple, elegant ... and inexpensive.
I've suggested previously that systems of academic rewards need to be rejigged to take into account not just research income and publication outputs, but the relationship between the two. Of course, some kinds of research require big bucks, but large-scale grants are not always cost-effective. And on the other side of the coin, there are people who do excellent, influential work on a small budget.
I thought I'd see if it might be possible to get some hard data on how this works in practice. I used data for Psychology Departments from the last Research Assessment Exercise (RAE), from this website, and matched this up against citation counts for publications that came out in the same time period (2000-2007) from Web of Knowledge. The latter is a bit tricky, and I'm aware that the figures may contain inaccuracies, as I had to search by address, using the name of the institution coupled with the words Psychology and UK. This will miss articles that don't have these words in the address. Also, when double-checking the numbers, I found that for a search by address, results can fluctuate from one occasion to the next. For these reasons, I'd urge readers to treat the results with caution, and I won't refer to institutions by name. Note too that though I restrict consideration to articles published between 2000 and 2007, the citations extend beyond the period when the RAE was completed. Web of Knowledge helpfully gives you an H-index for the institution if you ask for a citation report, and this is what I report here, as it is more stable across repeated searches than the citation count.
Figure 1 shows how research income for a department relates to its H-index, just for those institutions deemed research active, which I defined as having a research income of at least £500K over the reporting period. The overall RAE rating is colour-coded into bandings, and the symbol denotes whether or not the departmental submission mentions neuroimaging as an important part of its work.
Figure 1. Data from RAE and Web of Knowledge: treat with caution!
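As an aside, for anyone unfamiliar with the H-index: it is simply the largest number h such that there are h papers each cited at least h times. Here's a minimal sketch of that calculation in Python, using invented citation counts rather than the Web of Knowledge figures:

```python
def h_index(citation_counts):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper and all better-cited ones clear the bar
        else:
            break
    return h

# Invented example: six papers with these citation counts
print(h_index([25, 17, 9, 4, 3, 1]))  # prints 4: four papers each have at least 4 citations
```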
Several features are seen in these data, and most are unsurprising:
  • Research income and H-index are positively correlated, r = .74 (95% CI .59-.84), as we would expect. Both variables are correlated with the number of staff entered in the RAE, but the correlation between them remains healthy when this factor is partialled out, r = .61 (95% CI .40-.76). (A rough sketch of how such a partial correlation can be computed is given after this list.)
  • Institutions coded as doing neuroimaging have bigger grants: after taking into account differences in number of staff, the mean income for departments with neuroimaging was £7,428K and for those without it was £3,889K (difference significant at p = .01).
  • Both research income and H-index are predictive of RAE rankings: the correlations are .68 (95% CI .50-.80) for research income and .79 (95% CI .66-.87) for H-index, and together they account for 80% of the variance in rankings. We would not expect perfect prediction, given that the RAE committee went beyond metrics to assess aspects of research quality not reflected in citations or income. In addition, the citations counted here are for all researchers at a departmental address, not just those entered in the RAE.
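To illustrate what partialling out staff numbers involves, here is a minimal sketch in Python. The numbers are invented for illustration only - they are not the RAE data - and the residual-based method shown is just one standard way of computing a partial correlation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented data for 40 hypothetical departments (not the real RAE figures)
staff = rng.uniform(10, 60, 40)                  # staff entered in the RAE
income = 120 * staff + rng.normal(0, 800, 40)    # research income (£K)
h_index = 0.3 * staff + 0.003 * income + rng.normal(0, 3, 40)

def partial_corr(x, y, z):
    """Pearson correlation between x and y after regressing z out of both."""
    x_resid = x - np.polyval(np.polyfit(z, x, 1), z)
    y_resid = y - np.polyval(np.polyfit(z, y, 1), z)
    return stats.pearsonr(x_resid, y_resid)[0]

print(f"simple r(income, H-index) = {stats.pearsonr(income, h_index)[0]:.2f}")
print(f"partial r, staff removed  = {partial_corr(income, h_index, staff):.2f}")
```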
A point of concern to me in these data, though, is the wide spread in H-index seen for those institutions with the highest levels of grant income. If these numbers are accurate, some departments are using their substantial income to do influential work, while others seem to achieve no more than departments with much less funding. There may be reasonable explanations for this - for instance, a large tranche of funding may have been awarded in the RAE period but not yet have had time to percolate through into publications. Nevertheless, it adds to my concern that we may be rewarding those who chase big grants without paying sufficient attention to what they do with the funding when they get it.
What, if anything, should we do about this? I've toyed in the past with the idea of a cost-efficiency metric (e.g. citations divided by grant income), but this would not work as a basis for allocating funds, because some types of research are intrinsically more expensive than others. In addition, it is difficult to get research funding, and success in this arena is in itself an indicator that the researchers have impressed a tough committee of their peers. So, yes, it makes sense to treat level of research funding as one indicator of an institution's research excellence when rating departments to determine who gets funding. My argument is simply that we should be aware of the unintended consequences if we rely too heavily on this metric. It would be nice to see some kind of indicator of cost-effectiveness included in ratings of departments alongside the more traditional metrics. In times of financial stringency, it is particularly short-sighted to discount the contribution of researchers who are able to do influential work with relatively scant resources.
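Just to make the rejected metric concrete: with invented figures for two hypothetical departments, it would amount to something like the snippet below, which also shows why it would penalise fields where the research is intrinsically expensive:

```python
# Invented figures for two hypothetical departments (not real data)
departments = {
    "Dept A (imaging-heavy)": {"citations": 2600, "income_k": 7400},
    "Dept B (behavioural)":   {"citations": 2100, "income_k": 1600},
}

for name, d in departments.items():
    efficiency = d["citations"] / d["income_k"]
    print(f"{name}: {efficiency:.2f} citations per £K of grant income")
```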


Friday 13 July 2012

Communicating science in the age of the internet


[Cartoon © www.CartoonStock.com]
Here's an interesting test for those on Twitter. You see a tweet giving a link to an interesting topic. You click on the link and see it's a YouTube piece. Do you (a) feel pleased that it's something you can watch, or (b) immediately lose interest? The answer is likely to depend on content, but also on how long it is. Typically, if I see a video is longer than 3 minutes, I'll give up unless it looks super-interesting.
Test #2 is for those of you who are scientists. You have to give a presentation about a recent piece of work to a non-specialist audience. How long do you think you will need? (a) one hour; (b) 20 minutes; (c) 10 minutes; (d) 3 minutes.
If you're anything like me, there's a disconnect between your reactions to these different scenarios. The time you feel you need to communicate to an audience is much greater than the time you are willing to spend watching others. Obviously, it's not a totally fair comparison: I'm willing to spend up to an hour listening to a good lecture (but no more!); though to tell the truth, it's an unusual lecturer who can keep me interested for the whole duration.
Those who use the internet to communicate science have learned that the traditional modes of academic communication are hopelessly ill-suited for drawing in a wider audience. TED talks have been a remarkably successful phenomenon, and are a million miles from the normal academic lecture: the ones I've seen are typically no longer than 15 minutes and make minimal use of visual aids. The number of site visits for TED talks is astronomically higher than for, say, Cambridge University's archive of Film Interviews With Leading Thinkers, where Aaron Klug has had around 300 hits in just over one year, and Fred Sanger a mere 148. The reason is easy to guess: many of these Cambridge interviews last two hours or more. They constitute priceless archive material, and a wealth of insights into the influences that shape great academic minds, but they aren't suited to the casual viewer.
For most academics, though, shorter pieces pose a dilemma: they don't allow you to present the evidence for what you are saying. I felt this keenly when viewing a TED talk by autism expert Ami Klin. At 22 minutes, this was rather longer than the usual TED talk, but Klin is an engaging speaker, and he held my attention for the whole time. As I listened, though, I became increasingly uneasy. He was making some pretty dramatic claims. Specifically, as the accompanying blurb stated: "Ami Klin describes a new early detection method that uses eye-tracking technologies to gauge babies' social engagement skills and reliably measure their risk of developing autism". I was very surprised at the claims made for eye-tracking, and the data shown in the presentation were unconvincing. More generally, Klin talked about universal screening for 6-month-olds, but I was not sure that he understood the requirements for an effective screening test. After the talk I checked out Klin's publications on Web of Science and couldn't find any published papers that gave a fuller picture to back up this claim. I asked my colleagues who work in autism and none of them was aware of such evidence. I emailed Klin last week to ask if he could point me to relevant sources, but so far I've not had a reply. (If I do, I'll add the information.) At the time of writing, his talk has had over 132,000 views.
So we have a dilemma here. Nearly everyone agrees that scientists should engage with audiences beyond their traditional narrow academic confines. But the usual academic lecture, saturated with PowerPoint explaining and justifying every statement, is ill-suited to such an audience. However, if we reduce our communications to the bottom line, then the audience has to take a lot on trust. It may be impossible to judge whether the speaker is expressing an accepted mainstream view. If, as in the Klin case, the speaker is both famous and charismatic, then it's unlikely that a general audience will realise that many experts in his field would want to see a lot more hard evidence before accepting what he was saying.
I've been brooding about this issue because I've recently joined up with some colleagues in a web-based campaign to raise awareness of language impairments in children. My initial idea was that we'd post lectures by experts, attempting to explain what we know about the nature, causes, and impacts of language impairments. Fortunately, we were dissuaded from this idea by our friends in TeamSpirit, a public relations company who have come on board to help us get launched. With their assistance, we've posted several videos and worked out a clearer idea of what our YouTube channel should do. We will have professionally produced films that feature the experiences of young people with language impairments and their families, as well as the professionals working with them.
But we also wanted to ensure that the material we put out was evidence-based, and to include some pieces on issues where there were relevant research findings. We were advised that any piece by a talking academic head should be no more than 3 minutes long. I could see the wisdom of that, given my own reactions to longer video pieces. But I was uncomfortable. In 3 minutes, it's impossible to do more than give a bottom line. I didn't want people to have to take what I said on trust: I wanted them to have access to the evidence behind it.
Well, we're now experimenting with an approach that I think may work to keep everyone happy. Our academic-style talks will stick to the 3-minute limit, but will be associated with a link to a PowerPoint presentation which will give a fuller account. This is still shorter than the usual academic talk - we aim for around 15-20 slides, all of which should be self-explanatory without needing an oral narrative. And, crucially, the PowerPoint will include references to peer-reviewed research to support what is said, and will include a link to a reference list, including where possible a review article. I anticipate that most people who visit our YouTube site will only get as far as the 3-minute video. That's absolutely fine - after all, only a small proportion of potential visitors will be evidence geeks. But, importantly, the evidence will be there for those who want it. The PowerPoint will give the bare bones, and the references will allow people to track back to the original sources.
We live in exciting times, where it has become remarkably easy to harness the power of the internet to disseminate research. The challenge is to do so in a way that is effective while preserving academic rigour.