University Life: Why now is the wrong time of the academic year to get anything done….

A picture of a calendar
Better luck next year?

… and why that’s true for absolutely any value of “now”…..

September
“It’s the start of the academic year soon, everyone’s concentrating on preparing their teaching”

October
“It’s the busiest time for teaching.  I’ve got 237 tutorials this week alone”

November
“I’ve got about 4,238 essays to mark.”

December
“It’s nearly Christmas, nothing gets done at this time of year”

January
“I’ve got sixty thousand exam scripts to mark”

February
[See October]

March-April
“With the Easter break coming up, well…”

May
[See January, but with added Finalist-related urgency and some conferences]

June-August
“Conference season…. annual leave…. concentrated period of research in Tuscany….”

A few scrawled lines in defence of the ESRC…

A picture of lotto balls
Lotto? Balls

There’s a very strange article in the Times Higher today which claims that the ESRC’s latest “grant application figures raise questions about its future”.

Er…. do they?  Seriously?  Why?

It’s true that success rates are a problem – down to 16% overall, and 12% for the Research Grants Scheme (formerly Standard Grants).  According to the article, these are down from 17% and 14% respectively the year before.  It’s also true that RCUK stated in 2007 that 20% should be the minimum success rate.  But this long term decline in success rates – plus a cut in funding in real terms – is exactly why the ESRC has started a ‘demand management’ strategy.

A comment attributed to one academic (which could have been a rhetorical remark taken out of context) appears to equate the whole process to a lottery, and calls for it to be scrapped and the funding distributed via the RAE/REF.  This strikes me as an odd view, though not one, I’m sure, confined to the person quoted.  But it’s not a majority view, not even among the select number of academics approached for comments.  All of the other academics named in the article seem to be calling for more funding for social sciences, so it would probably be legitimate to wonder why the focus of the article is on “questions” about the ESRC’s “future”, rather than on calls for more funding.  But perhaps that’s just how journalism works.  It certainly got my attention.

While I don’t expect these calls for greater funding for social science research will be heard in the current politico-economic climate, it’s hard to see that abolishing the ESRC and splitting its budget will achieve very much.  The great strength of the dual funding system is that while the excellence of the Department of TopFiveintheRAE at the University of Russell deserves direct funding, it’s also possible for someone at the Department of X at Poppleton University to get substantial funding for their research if their proposal is outstanding enough.  Maybe your department gets nothing squared from HEFCE as a result of the last RAE, but if your idea is outstanding it could be you – to use a lottery slogan.  This strikes me as a massively important principle – even if in practice, most of it will go to the Universities of Russell.  As a community of social science scholars, calling for the ESRC to be abolished sounds like cutting off the nose to spite the face.

Yes, success rates are lower than we’d like, and yes, there is a strong element of luck in getting funded.  But it’s inaccurate to call it a “lottery”.  If your application isn’t of outstanding quality, it won’t get funded.  If it is, it still might not get funded, but… er… that’s not a lottery.

According to the ESRC’s figures, between 2007 and 2011, 9% of Standard Grant applications were either withdrawn or rejected at ‘office’ stage for various reasons.  13% fell at the referee stage (beta or reject grades), and 21% fell at the assessor stage (alpha minus).  So… 43% of applications never even got as far as the funding panel before being screened out on quality or eligibility grounds.

So… while the headline success rate might be 12%, the success rate for fundable applications is rather better.  12 funded out of 100 applications is 12%, but 12 funded out of the 57 applications in every 100 that are competitive is about 21%.  That’s what I tell my academic colleagues – if your application is outstanding, then you’re looking at roughly 1 in 5.  If it’s not outstanding, but merely interesting, or valuable, or would ‘add to the literature’, then look to other (increasingly limited) options.
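The conditional success rate arithmetic above is simple enough to sketch.  This is illustrative only – the 12% headline figure and the screening percentages are the ones quoted above, and the function name is mine:

```python
def conditional_success_rate(headline_rate, screened_out_fraction):
    """Success rate among applications that survive the pre-panel screening."""
    competitive_fraction = 1 - screened_out_fraction
    return headline_rate / competitive_fraction

# 9% fell at office stage, 13% at referees, 21% at assessors...
screened_out = 0.09 + 0.13 + 0.21  # ...so 43% never reached the funding panel

rate = conditional_success_rate(0.12, screened_out)
print(f"{rate:.0%}")  # about 21% for a competitive application
```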

So…. we need the ESRC.  It would be a disaster for social science research if it were not to have a Research Council.  We may not agree with everything it does and all of the decisions it makes, we may be annoyed and frustrated when they won’t fund our projects, but we need a funder of social science with money to invest in individual research projects, rather than merely in excellent Departments.

What would wholesale academic adoption of social media look like?

A large crowd of people
Crowded out?

Last week, the LSE Impact of Social Sciences blog was asking its readers and followers to nominate their favourite academic tweeters.  This got me thinking.  While that’s a sensible question to ask now, and one that could create a valuable resource, I wonder whether the question would make as much sense if asked in a few years’ time?

The drivers for academics (and arguably academic-related types like me) to start to use Twitter and to contribute to a blog are many – brute self-promotion; desire to join a community or communities; to share ideas; to test ideas; to network and make new contacts; to satisfy the impact requirements of the research funder; and so on and so forth.  I think most current PhD students would be very well advised to take advantage of social media to start building themselves an online presence as an early investment in their search for a job (academic or otherwise).  I’d imagine that a social media strategy is now all-but-standard in most ESRC ‘Pathways to Impact’ documents.  Additionally, there are now many senior, credible, well-established academic bloggers and twitterers, many of whom are also advocates for the use of social media.

So, what would happen if there were a huge upsurge in the number of academics (and academic-relateds) using social media?  What if, say, participation rates reached 20% or so?  Would the utility of social media scale, or would the noise-to-signal ratio be such that its usefulness decreased?

This isn’t a rhetorical question – I’ve really no idea and I’m curious.  Anyone?  Any thoughts?

I guess that there’s a difference between different types of social media.  I have friends who are outside the academy and who have Twitter accounts for following and listening, rather than for leading or talking.  They follow the Brookers and the Frys and the Goldacres, and perhaps some news sources.  They use Twitter like a form of RSS feed, essentially.

But what about blogging, or using Twitter to transmit, rather than to receive?  If even 10% of academics have an active blog, will it still be possible or practical to keep track of everything relevant that’s written?  In my field, I think I’ve linked to pretty much every related blog (see links in the sidebar) in the UK, and one from Australia.  In certain academic fields it’s probably similarly straightforward to keep track of everyone significant and relevant.  If this blogging lark catches on, there will come a point at which it’s no longer possible for anyone to keep up with everything in any given field.  So, maybe people become more selective and we drop down to sub-specialisms, and it becomes sensible to ask for our favourite academic tweeters on non-linear economics, or something like that.

On the other hand, it might be that new entrants to the blogging market will be limited and inhibited by the number already present.  Or we might see more multi-author blogs, mergers etc and so on until we re-invent the journal.  Or strategies that involve attracting the attention and comment of influential bloggers and the academic twitterati (a little bit of me died inside typing that, I hope you’re happy….).  Might that be what happens?  That e-hierarchies form (arguably they already exist) that echo real world hierarchies, and effectively squeeze out new entrants?  Although… I guess good content will always have a chance of ‘going viral’ within relevant communities.

Of course, it may well be that something else will happen.  That Twitter will end up in the same pile as MySpace.  Or that it simply won’t be widely adopted or become mainstream at all.  After all, most academics still don’t have much of a web 1.0 presence beyond a perfunctory page on their Department website.

That’s all a bit rambly and far longer than I meant it to be.  But as someone who is going to be recommending the greater use of social media to researchers, I’d like to have a sense of where all this might be going, and what the future might hold.  Would the usefulness of social media as an academic communication, information sharing, and networking tool effectively start to diminish once a certain point is reached?  Or would it scale?


The ESRC and “Demand Management”: Part 4 – Quotas and Sanctions, PIs and Co-Is….

A picture of Stuart Pearce and Fabio Capello
It won't just be Fabio getting sanctioned if the referee's comments aren't favourable.

Previously in this series of posts on ESRC Demand Management I’ve discussed the background to the current unsustainable situation and aspects of the initial changes, such as the greater use of sifting and outline stages, and the new ban on (uninvited) resubmissions.  In this post I’ll be looking forward to the possible measures that might be introduced in a year or so’s time should application numbers not drop substantially….

When the ESRC put their proposals out to consultation, there were four basic strategies proposed.

  • Charging for applications
  • Quotas for numbers of applications per institution
  • Sanctions for institutions
  • Sanctions for individual researchers

Reading between the lines of the demand management section of the presentation that the ESRC toured the country with in the spring, charging for applications is a non-starter.  Even in the consultation documents, this option only appeared to be included for the sake of completeness – it was readily admitted that there was no evidence that it would have the desired effect.

I think we can also all-but-discount quotas as an option.  The advantage of quotas is that they would allow the ESRC to control precisely the maximum number of applications that could be submitted.  Problem is, it’s the nuclear option, and I think it would be sensible to try less radical options first.  If their call for better self-regulation and internal peer review within institutions fails, and then sanctions schemes are tried and fail, then (and only then) should they be thinking about quotas.  Sanctions (and the threat of sanctions) seek to modify application submission behaviour, while quotas pretty much dictate it.  There may yet be a time when quotas are necessary, though I really hope not.

What’s wrong with quotas, then?  Well, there will be difficulties in assigning quotas fairly to institutions, in spite of complex plans for banding and ‘promotion’ and ‘relegation’ from the bands.  That’ll lead to a lot of game playing, and it’s also likely that there will be a lot of mucking around with the choice of lead applicant.  If one of my colleagues has a brilliant idea and we’re out of quota, well, maybe we’ll find someone at an institution that isn’t and ask them to lead.  I can imagine a lot of bickering over who should spend their quota on submitting an application with a genuinely 50-50 institutional split.

But my main worry is that institutions are not good at comparing applications from different disciplines.  If we have applications from (say) Management and Law vying for the last precious quota slot, how is the institution to choose between them?  Even if it has experts who are not on the project team, they will inevitably have a conflict of interest – there would be a worry that they would support their ‘team’.  We could give it a pretty good cognate discipline review, but I’m not confident we would always get the decision right.  It won’t take long before institutions start teaming up to provide external preliminary peer review of each other’s applications, and before you know it, we end up just shifting the burden from post-submission to pre-submission for very little gain.

In short, I think quotas are a last resort, and shouldn’t be seriously considered unless we end up with a combination of (a) the failure of other demand management measures, and/or (b) significant cuts in the amount of funding available.

Which leaves sanctions – either on individual researchers or on their institutions.  The EPSRC has had a policy of researcher sanctions for some time, and that’s had quite a considerable effect.  I don’t think that’s so much through sanctioning people and taking them out of the system as through a kind of chilling effect, whereby greater self-selection is taking place.  Once there’s a penalty for throwing in applications and hoping that some stick, people will stop.

As I argued previously, I think a lot of that pressure for increased submissions is down to institutions rather than individuals, who in many cases are either following direct instructions and expectations, or at least a very strong steer.  As a result, I was initially in favour of a hybrid system of sanctions where both individual researchers and institutions could potentially be sanctioned.  Both bear a responsibility for the application, and both are expected to put their name to it.  But after discussions internally, I’ve been persuaded that individual sanctions are the way to go, in order to have a consistent approach with the EPSRC, and with the other Research Councils, who I think are very likely to have their own version.  While the formulae may vary according to application profiles, as much of a common approach as possible should be adopted, unless of course there are overwhelming reasons why one of the RCs that I’m less familiar with should be different.

For me, the big issue is not whether we end up with individual, institutional, or hybrid sanctions, but whether the ESRC go ahead with plans to penalise co-investigators (and/or their institutions) as well as PIs in cases where an application does not reach the required standard.

This is a terrible, terrible, terrible idea and I would urge them to drop it.  The EPSRC don’t do it, and it’s not clear why the ESRC want to.  For me, the co-I issue is more important than which sanction model we end up with.

Most of the ESRC’s documents on demand management are thoughtful and thorough.  They’re written to inform the consultation exercise rather than dictate a solution, and I think the author(s) should be – on the whole – congratulated on their work.  Clearly a lot of hard work has gone into the proposals, which given their seriousness is only right.  However, nowhere can I find any kind of argument or justification for why co-investigators (insert your own ‘and/or institutions’ from here on) should be regarded as equally culpable.

I guess the argument (which the ESRC doesn’t make) might be that an application will be given yet more careful consideration if more than the principal investigator has something to lose.  At the moment, I don’t do a great deal if an application is led from elsewhere – I offer my services, and sometimes that offer is taken up, sometimes it isn’t.  But no doubt I’d be more forceful in my ‘offer’ if a colleague or my university could end up with a sanctions strike against us.  Further, I’d probably be recommending that none of my academic colleagues get involved in an application without it going through our own rigorous internal peer review processes.  Similarly, I’d imagine that academics would be much more careful about what they allowed their name to be put to, and would presumably take a more active role in drafting the application.  Both institutions and individual academics can, I think, be guilty of regarding an application led from elsewhere as a free roll of the dice.  But we’re taking action on this – or at least I am.

The problem is that these benefits are achieved (if they are achieved at all) at the cost of abandoning basic fairness.  It’s just not clear to me why an individual/institution with only a minor role in a major project should be subject to the same penalty as the principal investigator and/or the institution that failed to spot that the application was unfundable.  It’s not clear to me why the career-young academic named as co-I on a much more senior colleague’s proposal should be held responsible for its poor quality.  I understand that there’s a term in aviation – cockpit gradient – which refers to the difference in seniority between pilot and co-pilot.  A very senior pilot and a very junior co-pilot is a bad mix, because the junior will be reluctant to challenge the senior.  I don’t understand why someone named as co-I for an advisory role – on methodology perhaps, or for a discrete task – should bear the same responsibility.  And so on and so forth.  One response might be to create a new category of research team member less responsible than a ‘co-investigator’ but more involved in the project direction (or part of it) than a ‘researcher’, but do we really want to go down the road of redefining categories?

Now granted, there are proposals where the PI is primus inter pares among a team of equally engaged and responsible investigators, where there is no single, obvious candidate for the role of PI.  In those circumstances, we might think it would be fair for all of them to pay the penalty.  But I wonder what proportion of applications are like this, with genuine joint leadership?  Even in such cases, every one of those joint leaders ought to be happy to be named as PI, because they’ve all had equal input.  And the unfairness inherent in only one person getting a strike against their name (and the other(s) not) is surely much less than in the examples above?

As projects become larger, with £200k (very roughly, between two and two and a half person-years including overheads and project expenses) now being the minimum, the complex, multi-armed, innovative, interdisciplinary project is likely to become more and more common, because that’s what the ESRC says that it wants to fund.  But the threat of a potential sanction (or step towards sanction) for every last co-I involved is going to be a) a massive disincentive to large-scale collaboration, b) a logistical and organisational nightmare, or c) both.

Institutionally, it makes things very difficult.  Do we insist that every last application involving one of our academics goes through our peer review processes?  Or do we trust the lead institution?  Or do we trust some (University of Russell) but not others (Poppleton University)?  How does the PI manage writing and guiding the project through various different approval processes, with the danger that team members may withdraw (or be forced to withdraw) by their institution?  I’d like to think that in the event of sanctions on co-Is and/or institutions that most Research Offices would come up with some sensible proposals for managing the risk of junior-partnerdom in a proportionate manner, but it only takes one or two to start demanding to see everything and to run everything to their timetable to make things very difficult indeed.

Academics v. University administrators…. part 94…

A picture from the TV programme 'Yes Minister'

This week’s Times Higher has another article about Benjamin Ginsberg’s book The Fall of the Faculty: The Rise of the All-Administrative University and Why It Matters.  It’s written about the US, but it has obvious implications for the UK too.  Whether it’s that administrators are taking over, or that the tail is wagging the dog, or that we’re all too expensive/have too much power/are too numerous, complaints from some academics about “bureaucrats” are far from uncommon in the UK.

There are two ways, I think, in which I would like to respond to Ginsberg and his ilk.  And it’s the “ilk” I’m more interested in, as I haven’t read his book and don’t intend to.

The first way I could respond is to write a critical blog post, probably with at least one reference to the classic ‘what have the Romans ever done for us?‘ scene in Monty Python’s Life of Brian (“But apart from recruiting our students, hiring our researchers, fixing our computers, booking our conferences, balancing the books, and timetabling our classes, what have administrators ever done for us?”).  It would probably involve a kind of riposte-by-parody – there are plenty of things I could say about academics based upon stereotypes and a lack of understanding, insight, or empathy into what their roles actually entail.  Something about having summers off, being unable or unwilling to complete even the most basic administrative tasks, being totally devoid of any common sense, rarely if ever turning up at work… etcetera and so on.  I might even be tempted to chuck in an anecdote or two, like the time when I had to explain to an absolutely furious Prof exactly why good governance meant that I wasn’t allowed to simply write a cheque – on demand – on the university’s behalf to anyone she chose to nominate.

The second way of responding is to consider whether Ginsberg and other critics might have a point.

On the whole, I don’t think they do, and I’ll say why later on.  But clearly, reading the views attributed to Ginsberg, some of the comments that I’ve heard over the years, and the kind of comments that get posted below articles like Paul Greatrix’s defence of “back office” staff (also in the Times Higher), there’s an awful lot of anger and resentment out there – barely constrained fury in some cases.  And rather than simply dismissing it, I think it’s worthwhile for non-academics to reflect on that anger, and to consider whether we’re guilty of any of the sins of which we’re accused.

I didn’t want to be a university administrator when I was growing up.  It’s something I fell into almost by accident.  I had decided against “progressing” my research from MPhil to PhD, because although  I was confident that I could complete a PhD (I passed my MPhil without corrections), I was much less confident about the job market.  Was I good enough to be an academic?  Maybe.  Did I want it enough?  No.  But it gave me a level of understanding and insight into – and a huge amount of respect for – those who did want it enough.  Two more years (at least) living like a student?  Being willing to up sticks and move to the other end of the country or the other side of the world for a ten month temporary contract?  Thanks, but not for me.  I was ready to move towards putting down roots.  I was all set to go off and start teacher training when a job at Keele University came up that caught my eye.  And that job was on what was then known as the “academic related” scale.  And that’s how I saw myself, and still do.  Academic related.

My point is, I didn’t sign up to be obstructive, to wield power over academics, to build an ’empire’, or – worst of all – to be a jobsworth.  I’ve never had a role where I’ve actually had formal authority over academics, but I have had roles where I’ve been responsible for setting up and running approval processes – for conference funding, for sabbatical leave, for the submission of research grant applications, and (at the moment) for ethical approval for research.  When I had managerial responsibility for an academic unit, my aim was for academics to do academic tasks, and for managers and administrators to do managerial and administrative tasks.  That’s how I used to explain my former role – in terms of what tasks that previously fell to academics would now fall to me.  Nevertheless, academics were filling in forms and following administrative processes designed and implemented by me.  While that’s not power, it’s responsibility.  I’m giving them things to do which are only instrumentally related to their primary goal of research.  I am contributing to their administrative workload, and it’s down to me to make sure that anything I introduce is justified and proportionate, and that any systems I’m responsible for are as efficient as possible.

So when I hear complaints about ‘administration’ and ‘bureaucracy’ and university managers, whether those complaints are very specific or very general,  I hope I’ll always respond by questioning and checking what I do, and by at least being open to the possibility that the critics have a point.

However, I don’t think most of these complaints are aimed at the likes of me.  Partly because I’ve always had good feedback from academics (though what they say behind my back I have no idea….) but mainly because I’ve always been based in a School or Institute – I’ve never had a role in a central service department.  Thus my work tends to be more visible and more understood.  I have the opportunity to build relationships with academics because we interact on a variety of different issues on a semi-regular basis, which generally doesn’t happen for those based centrally.

And I think it’s those based centrally who usually get the worst flak in these kinds of debates.  I’ve not been immune from the odd grumble about central service departments myself in the past, when I’ve not got what I wanted from them when I wanted it.  But if I’m honest, I have to accept that I don’t have a good understanding of what it is they do, what their priorities are, and what kinds of pressure they’re under.  And I try to remind myself of that.  I wonder how many people who posted critical comments on Paul’s article would actually be able to give a good account of what (say) the Registry actually does?  I would imagine that relatively few of the academic critics have very much experience of management at any level in a large and complex organisation.

I’m not sure, however, that all of the critics bother to remind themselves of this.  It’s similar to the kinds of complaints about the civil service and the public sector in general.  ‘Faceless bureaucrats’ is an interesting and revealing term – what it really means is that you, the critic, don’t know them and don’t know or understand what it is they do.  ‘Non-job’ is another favourite of mine.  There are many sectors that I don’t understand, and which have job titles and job descriptions which make no sense to me, but I’m not so lacking in imagination or so arrogant as to assume that that means they’re “non-jobs”.  In fact, I’d say the belief that there are large groups of administrators – whether in universities or elsewhere – who exist only to make work for themselves and to expand their ’empire’ is a belief bordering on conspiracy theory.  Especially in the absence of evidence.  And extraordinary claims require extraordinary evidence.  That’s not to say that there is no scope for efficiencies, of course, but that’s a different scale of response entirely.

By all means, let’s make sure that non-academic staff keep a relentless focus on the core mission of the university.  Let’s question what we do, and consider how we could reduce the burden on academic staff, and be open to the possibility that the critics have a point.

But let’s not be too quick to denigrate what we don’t understand.  And let’s not mistake ‘Yes, Prime Minister’ for a hard-hitting documentary….

The ESRC and “Demand Management”: Part 3 – Submissions and re-submissions

A picture of a boomerang

In the previous post in this series, I said a few things about the increased use of outline application stages and greater use of ‘sifting’ processes to filter out uncompetitive applications before they reach the refereeing stage.  But that’s not the only change taking place straight away.  The new prohibition on “uninvited” resubmissions for the open-call Research Grants scheme has been controversial, and it’s fair to say that it’s not a move that found universal favour in our internal discussions about our institutional response to the ESRC’s second Demand Management consultation.  Having said that, I personally think it’s sensible – which in my very British way is quite high praise.

In recent years I’ve advised against resubmissions on the grounds that I strongly suspected that they were a waste of time.  Although they were technically allowed, the guidance notes gave the strong impression that this was grudging – perhaps even to the extent of being a case of yes in principle, no in practice.  After all, resubmissions were supposed to demonstrate that they had been “substantially revised” or some such phrase.

But the resubmissions the ESRC might have wanted presumably wouldn’t need to be “substantially revised” – tightening up perhaps, refocusing a bit, addressing criticisms, that kind of thing.  But “substantially revised”?  From memory, I don’t think an increase or decrease in scale would count.  Am I being unfair in thinking that any proposal that could be “substantially revised” and remain the same proposal (of which more later) was, well,  unfundable, and shouldn’t have been submitted in the first place?  The time and place for “substantially revising” your proposal is surely before submission.

The figures are interesting – resubmissions apparently account for about 7% of applications, so banning them should reduce application numbers by roughly that much – a significant step towards the very ambitious goal of halving the number of applications by 2014.  Of those resubmissions, 80% are unsuccessful.  A 20% success rate sounds high compared to some scheme averages, but it’s not clear what period of time that figure relates to, nor how it’s split over different schemes.  But even if it was just this last year, a 20% success rate for resubmissions compared to about 15% for first time applications is not a substantial improvement.  We should probably expect resubmissions to be of a higher standard, after all, and that’s not much of a higher standard.
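One way to see how little is at stake in the headline numbers is to plug the quoted figures into a quick blended-rate calculation.  To be clear, the 7%, 20% and 15% figures are the ones above; the model itself is just my own back-of-the-envelope sketch:

```python
# Figures quoted above: resubmissions are ~7% of applications,
# 20% of resubmissions succeed, ~15% of first-time applications succeed.
resub_share = 0.07
resub_success = 0.20
first_time_success = 0.15

# Headline success rate with resubmissions still in the mix
blended = resub_share * resub_success + (1 - resub_share) * first_time_success

# If resubmissions are banned, the headline rate is just the first-time rate
without_resubs = first_time_success

print(f"with resubmissions: {blended:.1%}")      # just over 15%
print(f"without resubmissions: {without_resubs:.1%}")
```

The two rates barely differ – which is the point: resubmissions aren’t doing dramatically better than first-time applications.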

But moving to invited-only resubmissions shouldn’t be understood in isolation.  With very little fanfare, the ESRC have changed their policy on a right to respond to referees’ comments.  They do have a habit of sneaking stuff onto their website when I’m not looking, and this one caught me out a bit.  Previously the right to respond was only available to those asking for more than £500k – now it’s for all Standard Grant applications.  I’m amazed that the ESRC hasn’t linked this policy change more explicitly to the resubmissions change – I’m sure most applicants would happily swap the right to resubmit for the right to respond to referees’ comments.

There are problems with this idea of “invited resubmissions”, though, and I suspect that the ESRC are grappling with them at the moment.

The first problem will be identifying the kinds of applications that would benefit from being allowed a second bite of the cherry.  I would imagine these might be very promising ideas, but which perhaps are let down by poor exposition and/or grant writing – A for ideas, E for execution type applications.  Others might be very promising applications which have a single glaring weakness that could be addressed.  But I wonder how many applications really fall into either of these categories.  If you’re good enough to have a fundable idea, it’s hard to imagine that you’d struggle to write it up, or that it would contain a fixable weakness.  But perhaps there are applications like this, where (for example) a pathways to impact plan is unacceptably poor, or where the panel wants to fund one arm of the project, but not the other.  Clearly the 20% figure indicates that there are at least some like this.

The danger is that the “invited resubmission” might become a runner-up prize for the applications that came closest to getting funding but didn’t quite make it.  But if they’re that good, is there really any point asking for a full resubmission?  Wouldn’t it be better for the ESRC to think about having a repêchage, where a very small number of high-quality applications get another chance in the next funding round?  I’m told that there can be a large element of luck involved in the number, quality, and costs of the competition at each funding meeting, so allowing a very small number of unsuccessful applications to be carried forward might make sense.  It might mean re-costing because of changed start dates – or simply re-costing on the same basis for the new project dates if successful – but I’m sure we’d accept that as a price worth paying.

A second problem is determining when an application is a resubmission, and when it’s a fresh application on a related topic.  So far we have this definition:

“a ‘new’ application needs to be substantively different from a previous submission with fresh or significantly modified aims and objectives, a different or revised methodological approach and potentially a different team of investigators. This significant change of focus will be accompanied by a different set of costings to deliver the project. Applications that fall short of these broad criteria and reflect more minor amendments based on peer review feedback alone will be counted as re-submissions.”

Some of my former colleagues in philosophy might appreciate this particular version of the identity problem.  I’ve had problems with this distinction in the past, when I was involved in an application submitted to the ESRC which was bounced back as a resubmission because it lacked the required letter explaining the changes.  Despite what I said last time about having broad confidence in ESRC staff to undertake sifting activities, in this case they got it wrong.  In fairness, it was a very technical economics application with a superficial similarity to a previous application, but you’d have to be an economist to know that.  In the end, the application was allowed as a new application, but wasn’t funded.  That case was merely frustrating, but the ESRC are planning on counting undeclared resubmissions as unsuccessful, with potential sanctions/quota consequences, so we need to get this right.  Fortunately…

“The identification of uninvited re-submissions will rest with staff within the ESRC, as is currently the practice. In difficult cases advice will be taken from GAP [Grant Assessment Panel] members. Applications identified as uninvited re-submissions will not be processed and classified as unsuccessful on quality grounds under any sanctions policy that we may introduce.”

Even so, I’d like to see the “further guidance” that the ESRC intend to produce on this.  While we don’t want applicants disguising resubmissions as fresh applications, there’s a danger of a chilling effect which could serve to dissuade genuinely fresh applications on a similar or related topic.  However, I’m heartened to see the statement about the involvement of GAP members in getting this right – that should provide some measure of reassurance.

The ESRC and “Demand Management”: Part 2 – Sifting and Outlines

ESRC office staff start their new sifting role

In part one of this fortnight-long series of posts on the ESRC and “demand management”, I attempted to sketch out some context.  Briefly, we’re here because demand has increased while the available funds have remained static at best, and are now declining in real terms.  Phil Ward and Paul Benneworth have both added interesting comments – Phil has a longer professional memory than I do, and Paul makes some very useful comments from the perspective of a researcher starting his career during the period in question.  If you read the previous post before their comments appeared, I’d recommend going back and having a read.

It’s easy to think of “demand management” as something that’s at least a year away, but there are some changes that are being implemented straight away – this post is about outline applications and “sifting”.  Next I’ll talk about the ban on (uninvited) resubmissions.

Greater use of outline stages for managed mode schemes (i.e. pretty much everything except open call Research Grants), for example, seems very sensible to me, provided that the application form is cut down sufficiently to represent a genuine time and effort saving for individuals and institutions, while still allowing applicants enough space to make a case.  It’s also important that reviewers treat outline applications as just that, and are sensitive to space constraints.  I understand that the ESRC are developing a new grading scheme for outline applications, which is a very good thing.  At outline stage, I would imagine that they’re looking for ideas that are of the right size in terms of scale and ambition, and at least some evidence that (a) the research team has the right skills and (b) that granting them more time and space to lay out their arguments will result in a competitive application.

With Standard Grants (now known as Research Grants, as there are no longer ‘large’ or ‘small’ grants), there will be “greater internal sifting by ESRC staff”.  I don’t know if this is in place yet, but I understand that there’s a strong possibility that this might not be done by academics.  I’m very relaxed about that – in fact, I welcome it – though I can imagine that some academics will be appalled.  But…. the fact is that about a third of the applications the ESRC receives are “uncompetitive”, which is a lovely British way of saying unfundable.  Not good enough.  Where all these applications are coming from I’ve no idea, and while I don’t think any of them are being submitted on my watch, it would be an act of extreme hubris to declare that absolutely.  However, I strongly suspect that they’re largely coming from universities that don’t have a strong research culture and/or don’t have high quality research support and/or are just firing off as many applications as possible in a mistaken belief that the ESRC is some kind of lottery.

I’d back myself to pick out the unfundable third in a pile of applications.  I wouldn’t back myself to pick the grant recipients, but even then I reckon I’d get close.  I can differentiate between what I don’t understand and what doesn’t make sense with a fair degree of accuracy, and while I’m no expert on research methods, I know when there isn’t a good account of methods, or when they’re not explained or justified.  I can spot a Case for Support that is 80% literature review and only 20% new proposal.  I can tell when the research question(s) subtly change from section to section.  And I’d back others with similar roles to me to be able to do the same – if we can’t tell the difference between a sinner and a winner…. why are research intensive universities bothering to employ us?

And if I can do it with a combination of some academic background (MPhil political philosophy) and professional experience, I’m sure others could too, including ESRC staff.  They’d only have to sort the no-hopers from the rest, and if a few no-hopers slip through, or if a few low quality fundable some-hopers-but-a-very-long-way-down-the-lists drop out at that stage, it would make very little difference.  Unless, of course, one of the demand management sanction options is introduced, at which point the notion of non-academics making decisions that could lead to individual or institutional sanctions becomes a little more complicated.  But again, I think I’d back myself to spot grant applications that should not have been submitted, even if I wouldn’t necessarily want a sanctions decision depending on my judgement alone.

Even if they were to go with a very conservative policy of only sifting out applications which, say, three ESRC staff agree are dreadful, that could still make a substantial difference to the demands on academic reviewers.  I guess that’s the deal – you submit to a non-academic having some limited judgement role over your application, and in return, they stop sending you hopeless applications to review.

If I were an academic I’d take that like a shot.

Costs of interview transcription: Take a letter, Miss Jones….

A picture of Michelle from "'Allo 'allo"
"Listen verry carefully.... I will say zis anly wance"

Quick post on something other than the ESRC, for a change…..

Transcription is a major category of expense for social science research projects, and I’ve been wondering for some time whether it’s possible to make cost savings without sacrificing accuracy, consistency, confidentiality, speed of turnaround, and all of the other things we require.

One problem is that there seems to be a wide variety of pricing models.  Some charge by the hour of tape, some by the hour of staff time, some by other, smaller units of time.  Another is that there are different types of transcription – verbatim (which includes every last hesitation and verbal tic) and then varying degrees of near-verbatim stuff.  Some transcription is of fairly straightforward one-on-one interviews, but sometimes it’s whole focus groups or meetings where individual speakers need identifying.  The quality of the recordings and the clarity of those speaking may be variable.  I’ve also been assured that there are cases where a Research Associate with specialist knowledge (rather than a generalist audio typist) is required, though that was for a video recording.
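To show why those pricing models are so hard to compare, here’s a toy costing sketch.  Every rate and typing ratio below is an illustrative assumption, not a real quote – the point is that per-tape-hour and per-staff-hour quotes only become comparable once you pin down how many staff hours one hour of audio takes, which depends on recording quality and how verbatim you need it:

```python
# Toy comparison of two common transcription pricing models.
# All rates and typing ratios are illustrative assumptions, not real quotes.

def cost_per_audio_hour(audio_hours, rate_per_audio_hour):
    """Model A: supplier quotes a flat price per hour of recording."""
    return audio_hours * rate_per_audio_hour

def cost_per_staff_hour(audio_hours, typing_ratio, rate_per_staff_hour):
    """Model B: supplier quotes per typist hour.  A clear one-on-one
    interview might take ~4 staff hours per audio hour; a focus group
    with speakers to identify might take ~6 (assumed ratios)."""
    return audio_hours * typing_ratio * rate_per_staff_hour

audio = 10  # hours of interview recordings

quote_a = cost_per_audio_hour(audio, rate_per_audio_hour=60)
quote_b = cost_per_staff_hour(audio, typing_ratio=4, rate_per_staff_hour=14)
quote_b_hard = cost_per_staff_hour(audio, typing_ratio=6, rate_per_staff_hour=14)

print(quote_a, quote_b, quote_b_hard)  # model B wins or loses on the ratio
```

On these made-up numbers, model B is cheaper for easy material and dearer for hard material than model A – so the same two suppliers can each be “better value” depending on what you’re recording.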

I imagine there are plenty of models of sourcing transcription across universities – in-house capacity, a list of current/former staff looking for extra work, or a contract with a preferred supplier.  Or some kind of mixture of provision.  One option would be to look at getting better value, but given the difficulty in comparing price and quality, I’m not sure how far this would get us.  I’m also a little unhappy at the thought of trying to reduce what I suspect are already fairly low rates of pay.

I wonder if technology has reached a point where it would be worth looking seriously at voice recognition software for producing a first-pass transcript.  At least for non-verbatim requirements, this might produce a document that would just need correcting and tidying up, which might be quicker (and therefore cheaper) than transcribing the whole thing.  However, I can’t help remembering an episode when a friend tried voice recognition software which couldn’t cope with his Saarrf Lahndahn accent… which got more pronounced the more frustrated he got with its utter failure to anderstan’ wot ee waz sayin.  But I’m sure technology has moved on.

The ever-reliable Wikipedia reckons that 50% of live TV subtitles were produced via voice recognition as of 2005, though there’s a “citation needed” for this claim.  But even if true, I would imagine that a fair amount of speech on live TV is more scripted and rehearsed – and therefore easier to automatically transcribe – than what someone might say in a research interview.  More RP accents, too, I’d imagine.

Anyone have any experience of using voice recognition software for transcription?  Or is the technology not quite there yet?

The ESRC and “Demand Management”: Part 1 – How did we get here?

A picture of Oliver Twist asking for more
Developing appropriate demand management strategies is not a new challenge

The ESRC have some important decisions to make this summer about what to do about “demand management”.  The consultation on these changes closed in June, and I understand about 70 responses were received.  Whatever they come up with is unlikely to be popular, but I think there’s no doubt that some kind of action is required.

I’ve got a few thoughts on this, and I’m going to split them across a number of blog posts over the next week or so.  I’m going to talk about the context, the steps already taken, the timetable, possible future steps, and how I think we in the “grant getting community” should respond.

*          *          *          *          *

According to the presentation the ESRC gave around the country this spring, the number of applications received has increased by about a third over the last five years.  For most of those five years, there was no more money, and because of the flat cash settlement at the last comprehensive spending review, there’s now effectively less money than before.  As a result, success rates have plummeted, down to about 13% on average.  There are a number of theories as to why application rates have risen.  One hypothesis is that there are just more social science researchers than ever before, and while I’m sure that’s a factor, I think there’s something else going on.

I wonder if the current problem has its roots in the last RAE.  On the whole, it wasn’t good in brute financial terms for social science – improving quality in relative terms (unofficial league tables) or absolute terms was far from a guarantee of maintaining levels of funding.  A combination of protection for the STEM subjects, grade inflation (or genuinely rising standards), and increased numbers of staff FTE returns shrank the unit of resource.  The units that did best in brute financial terms, it seems to me, were those that were able to maintain or improve quality, but submit a much greater number of staff FTEs.  The unit of assessment that I was closest to in the last RAE achieved just this.

What happened next?  Well, I think a lot of institutions and academic units looked at a reduction in income, looked at the lucrative funding rules of research council funding, pondered briefly, and then concluded that perhaps the ESRC (and other research councils) would giveth where RAE had taken away.

Problem is, I think everyone had the same idea.

On reflection, this may only have accelerated a process that started with the introduction of Full Economic Costing (fEC).  This had just started as I moved into research development, so I don’t really remember what went before it.  I do remember two things, though: firstly, that although research technically still represented a loss-making activity (in that it only paid 80% of the full cost) the reality was that the lucrative overhead payments were very welcome indeed.  The second thing I remember is that puns about the hilarious acronym grew very stale very quickly.

So…. institutions wanted to encourage grant-getting activities.  How did they do this?  They created posts like mine.  They added grant-getting to the criteria for academic promotions.  They started to set expectations.  In some places, I think this even took the form of targets – either for individuals or for research groups.  One view I heard expressed was along the lines of, well, if Dr X has a research time allocation of Y, shouldn’t we expect her to produce Z applications per year?  Er…. if Dr X can produce outstanding research proposals at that rate, and if applying for funding is the best use of her time, then sure, why not?  But not all researchers are ESRC-able ideas factories, and some of them are probably best advised to spend at least some of their time, er, writing papers.  And my nightmare for social science in the UK is that everyone spends their QR-funded research time writing grant applications, rather than doing any actual research.

Did the sector as a whole adopt a scattergun policy of firing off as many applications as possible, believing that the more you fired, the more likely it would be that some would hit the target?  Have academics been applying for funding because they think it’s expected of them, and/or they have one eye on promotion?  Has the imperative to apply for funding for something come first, and the actual research topic second?  Has there been a tendency to treat the process of getting research council funding as a lottery, for which one should simply buy as many tickets as possible?  Is all this one of the reasons why we are where we are today, with the ESRC considering demand management measures?  How many rhetorical questions can you pose without irritating the hell out of your reader?

I think the answer to these questions (bar the last one) is very probably ‘yes’.

But my view is based on conversations with a relatively small number of colleagues at a relatively small number of institutions.  I’d be very interested to hear what others think.