All posts by Bryony Ramsden

Focus group analysis

The focus group analysis has just been released to each individual collaborating institution.  The groups were designed to gather additional qualitative data on usage of library resources and facilities, asking students how much they used them, where they chose to use them, any difficulties they experienced, and whether the library satisfied their information and learning space requirements.

Students volunteered in return for a small reimbursement for their time and involvement, with varying recruitment success at each institution (if you’ve been following the blog, you’ll have already seen De Montfort’s focus group discussion), but the groups still resulted in a huge amount of data to analyse!

The coding process involved reading through transcripts to bring out broad themes, then refining those themes into smaller groups where applicable.  Transcripts were then re-read for the analysis itself, with the aim not just of coding them, but of using thematic clues to develop and elaborate on what students discussed.  For example, a student discussing problems they had encountered using a resource may simultaneously be indicating indirectly that their student group could benefit from more in-depth information literacy training, or that there could be improved subscription options for that subject area.

Analysis was also based around frequency of mentions: the more often a code or theme was discussed, the more important an element it was taken to be in student library use/non-use.  This method can be problematic in that it doesn’t always capture the emphasis and enthusiasm that emerge in the group discussion, and it can be heavily influenced by issues the students happen to be experiencing at the time, but it does still demonstrate what is important to the participant at that point and thus what is meaningful to them.  Additionally, when used in combination with other codes and the analysis technique above, it can build a revealing picture of student experiences and usage, and provide material to lead further research at a later date if appropriate.
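As a rough illustration of the frequency side of this, the sketch below counts how often each analyst-assigned code appears across coded transcript segments.  The segments and code names are invented purely for illustration; they aren’t taken from the real focus group data.

```python
from collections import Counter

# Hypothetical coded transcript segments: each segment has been tagged with
# one or more analyst-assigned codes during the first read-through.
coded_segments = [
    {"speaker": "S1", "codes": ["e-resources", "access problems"]},
    {"speaker": "S2", "codes": ["group study space"]},
    {"speaker": "S1", "codes": ["e-resources"]},
    {"speaker": "S3", "codes": ["opening hours", "group study space"]},
]

# Count how often each code appears across the focus group discussion.
code_frequency = Counter(
    code for segment in coded_segments for code in segment["codes"]
)

# The most frequently mentioned codes are candidates for broader themes,
# subject to the caveats about emphasis and current issues noted above.
for code, count in code_frequency.most_common():
    print(f"{code}: {count}")
```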

Reflections on Huddersfield’s data

Following on from De Montfort’s blog post about the nature of their data submission, we’ve been thinking a bit more about what we could have included (and indeed what we might look at when we finish this project).

We’ve already been thinking about how we could incorporate well-established surveys into our data (both our own internal data collection, such as our library satisfaction survey, and external surveys).  While our biggest concern is getting enough data to draw conclusions, qualitative data is naturally a problematic area: numerical data ‘just’ needs obtaining and clearing for use, but getting information from students about why they do or don’t use resources and the library can be quite complicated.  Using surveys outside of the project focus groups could be a way of gathering simple yet informative data to indicate trends and personal preferences.  Additionally, if certain groups of students choose to use the library a little or a lot, existing surveys may give us at least a basic indication of why.

We also may want to ask (and admittedly I’m biased here given my research background!) what makes students choose the library for studying and just how productive they are when they get here.  Footfall data from the original project has already clearly demonstrated that library entries do not necessarily equate to degree results.  Our library spaces have been designed for a variety of uses: social learning, group study, individual study, specialist subject areas.  However, that doesn’t mean they are used for those purposes.  Footfall can mean checking email and logging on to Facebook (which of course links back to computer log-in data and how that doesn’t necessarily reflect studying), but it can also mean intensive group preparation, for example law students working on a moot (perhaps without using computers or any resources other than hard copy reference editions of law reports).

If we want to take the data even further, we could look deeper into borrowing in terms of specific collection usage too.  Other research (De Jager, 2002) has found significant correlations between use of specific hard copy collections (in De Jager’s case, examples include reference materials and short loan items) and attainment, with the relationship between resource use and academic achievement varying across different subjects.  If we were to break down collection type in our borrowing analysis (particularly where there may be special collections of materials or large numbers of shorter loan periods), would we find anything that would link up to electronic resource use as a comparison?  We could also consider incorporating reading lists into the data to check whether recommended texts are used heavily in high attainment groups…

De Jager, K. (2002), “Successful students: does the library make a difference?”, Performance Measurement and Metrics, 3(3), pp. 140-144.
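If we did break borrowing down by collection type, the analysis might look something like the rough sketch below.  The data frame and column names are entirely hypothetical; the real figures would come from our circulation and student records systems.  A rank correlation is used because degree classifications are ordinal rather than interval data.

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-student records: loans split by collection type plus a
# final attainment score (degree classification mapped to an ordinal scale).
students = pd.DataFrame({
    "short_loan_items": [12, 3, 25, 0, 8, 15, 2, 30],
    "reference_items":  [4, 0, 10, 1, 6, 9, 0, 12],
    "standard_loans":   [40, 10, 55, 5, 22, 35, 8, 60],
    "attainment":       [3, 2, 4, 1, 2, 3, 1, 4],  # 1 = third ... 4 = first
})

# Spearman rank correlation between each collection type and attainment,
# echoing De Jager's (2002) approach of looking at collections separately.
for collection in ["short_loan_items", "reference_items", "standard_loans"]:
    rho, p_value = spearmanr(students[collection], students["attainment"])
    print(f"{collection}: rho={rho:.2f}, p={p_value:.3f}")
```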

What will this project do for library users?

The project aims to draw some pretty big conclusions about library usage and attainment by the end of the data analysis, but what can we actually do with this information once we’ve got proof?  What use is it to our customers?

We’ve got two main groups of library users: staff and students.  We aim to use our quantitative data to pinpoint groups of students with a particular level of attainment.  We’ll work with staff to improve poor scores and learn from those who are awarded high scores, regardless of whether they are high or low users of our resources and facilities.  Focus groups held now, and most likely regularly in the future, will tell us more about people who use library resources less but achieve good degree results.  If the materials we are providing aren’t what students want to use, we can tailor our collections to reflect their needs as well as ensure they get the right kind of information their tutors want them to use.

The student benefits are pretty obvious – the more we can advise and communicate with them and encourage use of library staff and of electronic and paper resources, the more likely they are to get a good degree and get value from their time (and money!) spent at university.  Once again we state here that we are aware of other factors in student attainment, but a degree is not achieved without some knowledge of the subject, and we help supplement the knowledge communicated by lecturers.

Students get value for money and hopefully enjoy their university experience, lecturers ensure students get the right kind of support and materials they need, and we make sure our budget is used appropriately.  Pretty good, huh?

The legal stuff…

One of the big issues for the project so far has been to ensure we are abiding by legal regulations and restrictions.  The data we intend to use for our hypothesis is sensitive on a number of levels, and we have made efforts to ensure there is full anonymisation of both students and universities (should our collaborating institutions choose to remain anonymous).  We contacted JISC Legal prior to data collection to confirm our procedures are appropriate, and additionally liaised with our Records Manager and the University’s legal advisor.

Our data involves linking student degree results with their borrowing history (i.e. the number of books borrowed), the number of times they entered the library building, and the number of times they logged into electronic resources.  In retrieving data we have ensured that any identifying information is excluded before it is handled for analysis.  We have also excluded any small courses to prevent identification of individuals, for example where a course has fewer than 35 students and/or fewer than 5 students at a specific degree level.
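As a minimal sketch of what that suppression step might look like in practice (assuming the data has already had identifiers stripped and been loaded into a pandas data frame; the ‘course’ and ‘degree_level’ column names are hypothetical):

```python
import pandas as pd

def suppress_small_groups(records: pd.DataFrame,
                          min_course_size: int = 35,
                          min_level_group: int = 5) -> pd.DataFrame:
    """Drop rows from courses too small to guarantee anonymity."""
    # Exclude any course with fewer than min_course_size students overall.
    course_sizes = records.groupby("course")["course"].transform("size")
    records = records[course_sizes >= min_course_size]

    # Also exclude course/degree-level combinations with fewer than
    # min_level_group students, so individuals cannot be picked out by
    # their result.
    level_sizes = records.groupby(["course", "degree_level"])["course"].transform("size")
    return records[level_sizes >= min_level_group]
```

In practice the thresholds would of course mirror whatever our Records Manager and legal advisor have signed off.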

To notify library and resource users of our data collection, we referred to another activity data project, run by EDINA, which provides the following statement for collaborators to use on their webpages:

“When you search for and/or access bibliographic resources such as journal articles, your request may be routed through the UK OpenURL Router Service (openurl.ac.uk), which is administered by EDINA at the University of Edinburgh.  The Router service captures and anonymises activity data which are then included in an aggregation of data about use of bibliographic resources throughout UK Higher Education (UK HE).  The aggregation is used as the basis of services for users in UK HE and is made available to the public so that others may use it as the basis of services.  The aggregation contains no information that could identify you as an individual.”

Focus groups have also been conducted with a briefing and a consent form, to ensure participants are fully aware of how data from the group will be used and how they will be anonymised, and to advise them that they can leave the group at any point.

Hypothesis musings

Since the project began, I’ve been thinking about all the issues surrounding our hypothesis, and the kind of things we’ll need to consider as we go through our data collection and analysis.

For anyone who doesn’t know, the project hypothesis states that:

“There is a statistically significant correlation across a number of universities between library activity data and student attainment”

The first obvious thing here is that we realise there are other factors in attainment!  We do know that the library is only one piece in the jigsaw that makes a difference to the kind of grades students achieve.  However, we do feel we’ll find a correlation in there somewhere (ideally a positive one!).  Once I started thinking about it beyond a basic level of “let’s find out”, though, more and more extra considerations leapt to mind!

Do we need to look at module level or overall degree?  There are all kinds of things that can happen that are module specific, so students may not be required to produce work that would draw on library resources, but still need to submit something for marking.  Some modules may be based purely on students’ own reflection or creativity.  Would those be significant enough to need noting in overall results?  Probably not, but some degrees may have more of these types of modules than others, so it could be worth remembering.

My next thought was how much library resource usage counts as supportive of attainment.  Depending on the course, students may only need a small amount of material to achieve high grades.  Students on health sciences/medicine courses at Huddersfield are asked to do a lot of work on evidence-based assignments, which would mean a lot of searching through university-subscribed electronic resources, whereas a student on a history course might prefer to find primary sources outside of our subscriptions.

On top of these, there are all kinds of confounding factors that may play with how we interpret our results:

  • What happens if a student transfers courses or universities, and we can’t identify that?
  • What if teaching facilities in some buildings are poor and have an impact on student learning/grades?
  • What if a university has facilities other than the library beyond the library entrance gates, skewing footfall statistics?
  • How much usage of the library facilities is for socialising rather than studying?
  • Certain groups of students may have an impact on the data, such as distance learners and placement students, international students, or students with specific personal needs.  For example, some students may be more likely to use one specific kind of resource a lot out of necessity.  Will their numbers be large enough to skew results?
  • Some student groups are paid to attend courses and may have more incentive to participate in information literacy related elements e.g. nurses, who have information literacy classes with lots of access to e-resources as a compulsory part of their studies.

A key thing emerging here is that lots of resource access doesn’t always mean quality use of materials, critical thinking, good writing skills…  And even after all this we need to think about sample sizes – our samples are self-selected, and involve universities of varying sizes with various access routes to resources.  Will these differences between institutions be a factor as well?

All we can do for now is take note of these and remember them when we start getting data back, but in the meantime I set to thinking about how I’d revise the hypothesis if we could do it again, with what is admittedly a tiny fraction of these issues considered within it:

“There is a statistically significant correlation between library activity and student attainment at the point of final degree result”

So it considers library usage overall and degree result overall, and leaves a lot of other factors to think about while we work on our data!
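For anyone wondering what testing either version of the hypothesis might eventually look like, here is a very rough sketch.  The dataset and column names are invented purely for illustration, and a real analysis would need proper significance testing and far larger samples than this toy example.

```python
import pandas as pd

# Hypothetical per-student dataset: the three activity measures the project
# collects, plus final degree result mapped to an ordinal scale.
data = pd.DataFrame({
    "items_borrowed":   [5, 42, 18, 0, 27, 11, 60, 3],
    "library_entries":  [20, 150, 80, 5, 95, 40, 210, 12],
    "eresource_logins": [2, 75, 30, 0, 50, 15, 120, 4],
    "final_degree":     [1, 4, 3, 1, 3, 2, 4, 2],  # 1 = third ... 4 = first
})

# Spearman rank correlations between each activity measure and final degree
# result (rank-based because degree classifications are ordinal).
print(data.corr(method="spearman")["final_degree"].drop("final_degree"))
```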

Notes from the first meeting 11.03.11

The group had its very first meeting on Friday the 11th, and it was a full house – almost all the group members managed to make it to Huddersfield, and were greeted with hot cross buns and biscuits aplenty.

Introductions were made, and the meeting kicked off with Dave Pattern providing an overview of the background to the project.  The germ of the idea came when the library started investigating the kind of people who were using the library, looking at an overall picture rather than something specifically course-based.  However, it became obvious that there were certain courses whose students used the library a lot, and some whose students barely entered it, if at all.  Creating a non/low usage group within the library at Huddersfield gave the team a chance to focus on targeting specific groups to examine use in more detail, but it never created a statistically sound basis for drawing conclusions, and so the LIDP was conceived!

Graham Stone, the project manager, went through the project documentation and how information is to be disseminated via the blog (with comments welcome from all project members), and reminded members that we don’t consider a positive correlation between library use and attainment to be a causal relationship!  The group is very aware of other factors that come into attainment and is by no means suggesting that library use is the only element of importance!  Data protection and ethical issues were considered, keeping in mind pending information from Huddersfield’s legal advisor.

Graham asked for volunteers to join a project steering group based at Huddersfield (taking travel distance into consideration!), and it was agreed that Salford would have a representative join the group (a blog post dedicated to the steering group is coming soon).

Bryony Ramsden, the project research assistant, talked about issues that might disrupt the hypothesis (see the main hypothesis blog post), and introduced the idea of running focus groups.  Some qualitative data would help explain exactly why some people use the library a huge amount and some don’t, and help discover why discrepancies between courses might develop.  Samples would ideally be a mixture of student types, covering the main groups of undergraduates and postgraduates, both full and part time, across various schools/bodies.  Groups will need to run soon to ensure students aren’t disrupted too much before exams and assignment due dates begin to take up their time, and as term dates had already been found to differ between institutions, the plan was modified from running groups in April and May to running them over March and April!  Data collection could end up running a little tight here, but moving forward could actually be beneficial to all parties if the data is ready earlier than planned.

Dave talked about data collection and emphasised that he realises not all institutions will be able to provide the same sets of data types.  He talked through different routes for accessing data to maximise what could be available with a minimum of difficulty.  He offered a number of options for passing the data back to him (SQL, Excel, or he can provide code to help if required), with data covering at least the 2009/10 academic year.  Concerns were expressed that, because of variations in graduation dates, data may not cover a full academic year, but if these courses are flagged up there may be potential for comparison between like courses.  Dave said he’ll create a document detailing the systems at each institution so that he can offer advice easily on data gathering, and reminded everyone that if they have any other data they think might be useful, he’ll welcome suggestions.  Data encryption was discussed to address the data protection issues raised by the exchange process.  Data should be submitted to Dave by 23rd April.
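To give a feel for the kind of submission involved (and only as an illustration: the column names below are hypothetical, and the actual template and format will be agreed with Dave), a per-student extract for 2009/10 might be shaped something like this:

```python
import pandas as pd

# Hypothetical aggregated records for the 2009/10 academic year; identifying
# details would already have been stripped before this stage.
submission = pd.DataFrame({
    "course":           ["History BA", "History BA", "Nursing BSc"],
    "degree_level":     ["2:1", "1st", "2:2"],
    "items_borrowed":   [34, 51, 12],
    "library_entries":  [120, 210, 65],
    "eresource_logins": [18, 44, 90],
})

# Export in a simple exchange format (CSV here; SQL dumps or Excel files are
# the other agreed routes), ready to be encrypted before sending.
submission.to_csv("lidp_2009_10_submission.csv", index=False)
```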

Having discussed all the core elements needed to get things moving, the group went their separate ways, some to trains and car journeys, others to the pub (the Head of Steam, right on the train platform for convenience…).