I’ve travelled to Cape Town to participate in the first African Academic Integrity Conference. It’s been a great opportunity to catch up with Stella Orim from Coventry and her fascinating research on the attitudes of Nigerian students to plagiarism when studying in the UK. Her evidence is compelling and demonstrates the complexity of this issue, something that is relevant to many international students.
I’ve had a free day pass for the Blackboard Teaching and Learning Conference at Aston U in Birmingham (thanks Blackboard!). I’m particularly interested in this first session on Rubrics. The session is called ‘the good, the bad and the ugly’. It seems like there are still a few kinks in the design, but it does have some functionality that the rubrics in Turnitin don’t. The reporting looks particularly unhelpful and it seems that getting the raw data (for Analytics) is possible, if not straightforward. It also doesn’t seem to be integrated with SafeAssign. We saw a live demo of the rubric and it’s evident that it is possible to ‘tweak’ the decisions within the rubric using a pull-down menu – effectively giving students a high or low decision within a rubric cell. As I argued in my review of ReView, this is something that academics may ask for but it works at cross purposes to one of the key reasons why you would use rubrics: transparency.
It’s interesting to see the Kaltura mashup integration in Bb.
After the break I’ve come to listen to a presentation on DES – the Data Exchange System at Liverpool John Moores U. It’s not quite what I thought it would be (that’ll teach me for not reading the abstracts carefully enough). It seems that it started out as a way of tackling the point-to-point issues of data management within the institution. I have tuned out a bit with all the tech-speak, but have tuned in again for the pedagogical side of things. The focus on retention has really got my attention. The ability to give students earlier access to VLE content (4 weeks prior to induction) is really interesting. They see it as a way of helping students build a sense of belonging and integration with their courses. This is particularly true for library resources.
The next session I’ve come to is from Helen Parkin and Helen Roger of Sheffield Hallam U, about student expectations of Technology Enhanced Learning. The work they have done has enabled them to come up with (they’re calling it ‘inform and validate’) a set of minimum expectations for TEL and ways of supporting staff to achieve them. It’s interesting (to me at least) that 97% of students felt that it was important or very important to find staff contact details on the VLE. This is backed up by the strong expectation that the VLE is thought of as a key communication tool. 99% of respondents felt that it was important or very important to have assessment briefs online, and there were similarly very high expectations of being able to submit online and have work returned online. This is in line with their previous longitudinal work (done across the ‘noughties’), which showed similar expectations. There was also a call for a single one-stop location for all of a student’s assessment results across their course. There was a strong push from students for timetabling to be made available via a mobile platform, because dealing with frequent or last-minute changes on the go made their lives easier. Another strong expectation is for reading lists (which is one of the reasons why MyReadings is so brilliant for us at Huddersfield). At SHU the responsibility for linking to reading lists rests with academics, whereas for us it is effectively automated. This led to Helen’s final point regarding the automation of VLE material.
I’ve come down to Plymouth for the day to work with Lisa and Sarah from JISC and Emma from U of Wolverhampton to facilitate a workshop on electronic assessment and feedback with colleagues at Marjon UCP.
Lisa has gone over some of the findings from the baseline analysis that has been undertaken for the assessment and feedback strand. It’s caused me to reflect that one of the reasons we have been so successful in terms of assessment and feedback at the Uni of Huddersfield is because we have managed to make a lot of good decisions about involving students, keeping a focus on authentic design, emphasising timeliness and involving administrative staff throughout the process.
Talking to colleagues here it is once again clear that starting from the administrative environment has been one of the key aspects of our success. Paul Buckley’s principle of teamwork – what I tend to refer to as role clarity – is central.
There is a really clear sense emerging out of discussions today that institutional leadership is vital to the success of an EAM strategy.
It was great to hear from Emma Purnell at U of Wolverhampton about how PebblePad has been developed for EAM purposes. It includes a comment bank which can be shared across course teams and can have feedback forms added. It has Turnitin integration and has a kind of organic, built-in early warning system in the form of milestones that haven’t been met.
She talked in some detail about patchwork assessment which is a brilliant example of assessment for (or as) learning. The list of different patchwork ‘blocks’ that have been used is really thought provoking and I can see the applicability of this for colleagues in many different disciplines.
I’ve travelled to the Open University at Milton Keynes for the first UK SoLAR Flare. I’m braced for being bombarded with graphs and have added my fair share on my two slides. I’ve taken the opportunity to present some ideas about assessment analytics and to get a feel for the key areas of interest and debate amongst the movers and shakers in this emerging field. It’s great to put some faces to names, particularly the names of people whose work I’ve been reading.
Simon Buckingham Shum kicked off the day with a really helpful overview of the state of play and a reminder of the importance of this whole field. We’ve had a slew of lightning presentations, including my own, with lots of graphs and social network diagrams. As expected there is a heavy emphasis on Social Network Analysis (SNA). So far there has been no mention of assessment data or analytics beyond a passing reference to student grades by Dai Griffiths in his really helpful presentation about the potential dangers of learning analytics, Jean Mutton talking about measuring whether students collected their feedback or not, and Chris Ballard’s general mention of ‘grades’ in his presentation from Tribal Labs. This does keep me wondering if I’m missing something or involved in a separate conversation, but I continue to be convinced that assessment analytics is a blind spot. It was great to hear from Mark Stubbs about the really powerful things that their work at MMU has been able to find; the important point he raises is that we need to find better ways to feed this through to students and teachers.
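To make the SNA idea concrete for readers who haven’t seen it in action, here’s a minimal sketch of the kind of analysis behind those social network diagrams. The data and names are entirely invented by me (not from any of the presentations): counting who replies to whom in a discussion forum gives a crude degree-centrality measure of who sits at the centre of the conversation.

```python
from collections import Counter

# Hypothetical forum reply log: (replier, original_poster) pairs.
replies = [
    ("amy", "ben"), ("ben", "amy"), ("cat", "amy"),
    ("dan", "amy"), ("amy", "cat"), ("ben", "cat"),
]

def degree_centrality(edges):
    """Count how many interactions each student is involved in.

    A crude proxy for the centrality measures SNA tools report:
    students with high counts sit at the centre of the discussion.
    """
    counts = Counter()
    for replier, poster in edges:
        counts[replier] += 1   # replies sent (out-degree)
        counts[poster] += 1    # replies received (in-degree)
    return counts

centrality = degree_centrality(replies)
print(centrality.most_common(1))  # the most central student in this toy network
```

Real SNA tooling goes much further than this (weighted edges, clustering, visualisation), but even this toy version shows why the approach says more about participation patterns than about assessment.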
Before lunch we organised ourselves into breakout groups to discuss key themes. I joined a small group looking at dashboards, with other groups looking at things like retention and success, engagement and data management. All groups fed back after lunch and it was interesting to hear just how many of the key issues cross over the multiple themes. Mark Stubbs offered a timely warning about ‘can’t count, doesn’t count’, and Simon Buckingham Shum talked about the potential for learning analytics to measure process, which raises the possibility that we may be able to rely less on end-of-task assessment results in the future – really pleasing to hear. Martin Hawksey mentioned the implications that may come with the possible new My Data legislation and a population that is more data-aware. Some of the frustrations and tensions about learning analytics emerged in the discussion after this, which was interesting and highlighted the complex and sometimes conflicting demands being placed on it.
It’s clear that this is still an immature field and there is lots still to be done in terms of operationalisation.
Cheryl and I travelled to Birmingham yesterday to attend the Assessment and Feedback Project meetings and I’ve stayed on today to share the work of the project with the Learning and Teaching Practice Experts Group Meeting.
On both days it was interesting and useful to find out about the transition that JISC has been moving through in the past year and it was also great to see all the new JISC publications which have just come out to augment the already very useful booklets.
The project meeting was enormously reassuring, reminding Cheryl and me just how much we have achieved and the value of the work that our project has done. It was great to once again connect with Glamorgan and the work they’ve been doing. Our projects are seeking to answer similar questions but our different approaches have produced quite complementary evidence. It was also great to be able to touch base with folks from Manchester Metropolitan U and the opportunity to join forces with the English Department there is really exciting indeed.
The Experts meeting the following day has given the project a fantastic opportunity to showcase our wares. As usual our lovely poster attracted a lot of attention, but beneath that, sorting out administrative processes to support assessment management was of key interest to colleagues from other institutions. There was also interest in our learning and assessment analytics work. I can’t wait to have some proper graphs to showcase the evidence behind it. Statistics here I come!
I had the pleasure of working with some colleagues in the School of Computing and Engineering yesterday from the music technology subject area. I had been invited to work with them on the topic of audio feedback but, in showing them the new audio function in Grademark I also, inadvertently, wound up showing them Grademark itself. They’d simply not come across it before and therefore didn’t know it was available for them to use.
These colleagues have a particularly complex set of assessments to manage and all of them seemed to be crumbling under the administrative weight of keeping track of it all. When I showed them Grademark their eyes lit up and it was clear that, for some of them at least, this was the answer to a whole pile of problems they were facing. I mentioned that I’d been using it for around five years and one of them was amazed to discover it had been around that long.
I’ve been reflecting on this since and it’s been an interesting reminder that the emotions surrounding eMarking are complex and situated. Whenever I go to talk to folks about eMarking, which I inevitably do when I’m talking about EAM, the concern that is always articulated is the widely known fact that many academics are resistant to the idea. In other words, we know that there are people who, even if they are shown how eMarking can work and the benefits are explained to them, will still not want to do it. We all know this, worry about it and try to find ways around it. Yesterday reminded me, however, that there are other folks out there who are desperate for eMarking but who haven’t found or been shown an eMarking solution. When they see it they grab it with both hands and run with it.
Getting the message out there is, of course, vital. But more important is what Paul and I refer to as ‘getting the administrative conditions right’ across the institution so that this works for everyone who is ready for it.
I’ve come to ALT-C for the day to participate in a symposium debating effectiveness and efficiency in assessment and feedback, representing the work of the project. I’ll be joining representatives from several other projects in the assessment and feedback programme.
This post will offer a few passing thoughts on the presentations I attend:
A fascinating and somewhat provocative presentation from Bridgend College on the use of Facebook raised a few hackles. It’s interesting to see how it still generates such anxiety in the ALT community, including the concern that it devalues or discredits the institutional VLE. I was concerned by their statement that ‘all students use it’, which I know to simply not be true. A small but significant proportion of my students refuse to use Fb for all sorts of very sound ethical and moral reasons, so there’s no way I could require them to use it in the way Bridgend require their students to. I would find it very troubling to require a student who doesn’t want to use Fb to do so, but I have no problem requiring them to use the VLE. Their response to my question was that in the music industry (in which their students are preparing to work) this is the standard, and they’ve never had a student who doesn’t use it. I do wonder about its value (in their design) in other disciplines where the use of Fb is less likely to already be 100%.
The next session offered some fascinating work from Brian Mulligan from the Institute of Technology Sligo and Penn State on open learning badges. The idea of mastery learning is central to this and is related to the assessment analytics work we’ve been doing in the project. The simple statement that grades don’t guarantee competency is really troubling to the normative discourses of education. This presentation proposes a different way of thinking about this and an infrastructure to support it. Brian asked some provocative questions: is HE a cartel? Why do employers value our qualifications so much? Why might they like badges as an alternative? How might this drive change? What do we need to see to certify that someone can DO something? It could raise employers’ expectations and could even challenge long-standing reputations. We could even stop using degrees. These are questions and ideas that strike to the very heart of the pedagogy of assessment and feedback, not to mention the technologies used to support and facilitate it. It’s clear that trust is at the heart of all of this – which is true of how things stand at the moment. This raises the possibility that employers trust the current qualifications and accreditation system because that’s all that’s available for them to trust. The spectre of this operating as an open market is one about which I’m a little wary. MOOCs were mentioned and I suspect are something of a thread or theme running through this conference. The role this might play in adult learning, work-based learning and simply as a way of shaking up HE is really fascinating. The issue of course and learning coherence and aggregation runs the risk of getting us no further than where we already are (as one of the questioners put it, giving students a rag bag of badges to replace the current poorly articulated learning outcomes within degrees). Another questioner liked the terminology of the ‘democratisation of accreditation’.
And the final question was a corker, from someone from the Girl Guides – the sort of thing I was interested in asking, related to things like gamification and folksonomy. She mentioned that the Girl Guides are interested in introducing digital badges which turn into real badges, which sounds fascinating. I’m going to put her in touch with RITH to see if there are some ways they can help each other out. On the whole, I’m with Brian on this one – I would really like to see this succeed.
Brian came to talk to me after our symposium so I was able to share my thoughts with him about how this might connect to assessment analytics. I think this might be worth pursuing not simply because Brian seems to be as iconoclastic as I like to think I am but also because it might bring some interesting new dimensions to the project.
Our symposium seemed to go well. We were certainly kept sternly to time by Marianne (thanks!). It was good to once again hear from the other projects and remember just how many connections there are between them. The questions and discussion from the floor tended to focus on the knotty issues of eSubmission and eMarking, which is a shame to a certain extent as the issues to do with the core pedagogy of assessment (which Gunter focussed on) were, I think, more interesting. But this kind of goes to show just how much of the institutional concern at the moment is about getting the mechanics of this right first. Brian’s question to me from the floor was to do with the limits of efficiency. My answer to him was that of course there are limits to the efficiency gains we can make, but I can’t wait to get there! I guess this is at the core of the matter – getting the basic efficiency gains in place is something pretty much everyone is desperate for.
I also had conversations with folks from Manchester and the Open U after the session. I’d like to follow up the suggestion from the OU that the use of tablets and iPads is degrading the quality of marking because the typing is so poor. But this comes back to another point that Brian made – we need a ‘recipe’ for the infrastructural needs: is it dual screens? iPads?
I’m rather excited about an invited presentation on knitting, which is of little relevance to this project but hey – it’s knitting! And it’s started with a bit of social ‘knitworking’ as little bundles of wool get passed around the room. At the heart of this is how we can revive the role of coding in computing, particularly within schools.
After a very pleasant lunch, spent talking PebblePad with folks from Wolverhampton, next was the keynote by Natasa Milic-Frayling. Her paper is about network analysis which, of course, overlaps with the assessment analytics work of this project. Her talk started by exploring different aspects of collaborative learning and the role that technology plays and can play within it. She then turned to consider network analysis by taking us back to 2004 and to UseNet. She mentioned that this was the first time that sociologists had data on human interaction. She talked about the challenge of bringing together the ways that sociologists and computer scientists think about networks. It really is absolutely fascinating, but its usefulness still seems to be limited to sociological rather than pedagogical outcomes. Her final statements were about why it is so important. Ben Shneiderman’s work, which explicitly uses this strategy to encourage social participation, is getting closest to where I’m imagining this might be useful pedagogically.
I’ve travelled up to Dundee for the eAssessment Scotland to present on the work of the project.
The day opened up with a keynote from David Boud from the University of Technology Sydney. I’m really excited to hear David speak as I’ve long admired his work. He started by challenging us to think carefully about what feedback really means in the context of assessment in Higher Education, suggesting that simply finding new strategies or trying to do it better isn’t going to solve the problem of feedback.
He suggested that some of the mechanisms we use (e.g. improving turnaround times) aren’t in themselves going to solve the problem. Instead, he proposed, we need to rethink what feedback is – specifically that we should think of it less in terms of input (what teachers do) and more in terms of output (what students do with it). He said that feedback is one of the few times that the diversity of the student body is connected to the specificity of the curriculum.
David took his inspiration from the epistemological origins of feedback: biology and engineering. What intrigues me about his generational models of feedback is just how unthinkable they currently are in terms of managing the process and the data. Finding ways to gather and channel the information flows is an important part of what he is proposing.
The final layer of the generational change is agentic: putting feedback in the hands of the students. One of the problems he identifies is how students calibrate their own judgement. Here, rather than simply being an adjunct to marking, it is now integral to all learning processes. Self-regulation is central to the process and, he suggested, something that needs to be introduced earlier to shift learning identity. It should be normal, for instance, for us to ask students: ‘what sorts of comments do you want on this piece?’
David turned to consider the theme of the conference: what can technology offer? The ones that stood out for me:
- Quick knowledge of results and calibration of judgement
- Knowledge of what has gone before. What have I told student before? What have other tutors told this student before? If they’ve been told this before, how can I explain it differently because explaining it the same way again is unlikely to work?
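David’s ‘what have I told this student before?’ question is, at heart, a data-management one. Here’s a minimal sketch of what a feedback-history store might look like – the function names, comments and sample data are entirely my own invention, not any real tool’s API:

```python
from collections import defaultdict

# Hypothetical feedback store: comments indexed by student ID, so a tutor
# (or a future marking tool) could surface what has already been said.
feedback_log = defaultdict(list)

def record_feedback(student_id, comment):
    """Append a feedback comment to a student's history."""
    feedback_log[student_id].append(comment)

def previously_told(student_id, keyword):
    """Return earlier comments mentioning a topic, so the tutor can
    choose a different explanation rather than repeating the same one."""
    return [c for c in feedback_log[student_id] if keyword.lower() in c.lower()]

record_feedback("s123", "Referencing: check the citation format for journals.")
record_feedback("s123", "Good structure, but referencing is still inconsistent.")
print(previously_told("s123", "referencing"))  # both comments mention referencing
```

Trivial as it is, even this shows why anonymity is a problem for David’s model: without a persistent student identity, the history simply can’t be kept.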
(I was really pleased at this point to hear David have a big go at anonymity! He made the point that anonymity is incompatible with the concept of feedback as he has conceptualised it.)
An absolutely superb keynote which really cuts to the heart of what we really need to think about.
In the next session I delivered a workshop on the EBEAM project and got some great questions. One of the delegates made the very important point about audio feedback and accents, with his lovely rich, thick Scottish accent. Accents are the new handwriting! It was great to have the opportunity to discuss these things with Scottish institutions. While they are facing some different challenges, their issues are also largely the same.
After lunch the second keynote was delivered by Russell Stannard from the University of Warwick. I’ve enjoyed hearing Russell present before, when I was invited to participate in a conference at Harper Adams. He has a real knack for making video feedback seem achievable and accessible. Today he went through the journey he’s been on to develop his practice, and he is refreshingly prepared to show his starting point and some of the early iterations of the work he has done.
Seeing him this time caused me to reflect on how the proposed developments of the Grademark tool will allow us to do many of the things that he advocates but also automatically returns it to the students. Russell made the point about dyslexic students finding the audio feedback really helpful but it is worth also thinking about whether students on the autistic spectrum might find it more difficult to engage with and interpret than written feedback. He also made the same point Diana Laurillard makes: that using both the auditory and visual channels is useful.
Russell’s stuff is great, but when I think about managing this with a cohort of more than, say, 30 students, my heart sinks. The fact that he manages so much of this through email means that it’s not scalable to the size where you genuinely get economies of scale. The principles are all sound and exciting, but the administrative load that would come with it is huge. I suspect that technology will overtake his work. The next iteration of the audio tool in Grademark will do much of this. If there is also a channel for students to use to respond, then there is real potential for us to realise the vision that David Boud shared with us this morning.
The next session was a seminar presented by Sue Timmis from the Uni of Bristol and Steve Draper from the Uni of Glasgow. They reported on their research into eAssessment and the different understandings of what it means. Sue took us back to Rowntree’s 17 principles of good assessment. Sue talked about the role that assessment plays in terms of providing students with certificates for future employment as the elephant in the room. As Bloxham and Boyd point out, however, this is one of the four key reasons why we assess and this simply needs to be kept in balance with the other reasons.
Next we heard from Cherry Hopton and her students from Angus College. As Cherry says, what she’s doing isn’t particularly flash, but it’s often the simple ideas which are the best. The idea of students making products which they share with each other is one that I subscribe to in my own practice, and the benefits they’ve shared resonate with my own. Hearing from her students about their experience of using Facebook was really powerful. Their example of inter-cohort communication is, as I’ve discovered in my use of Twitter, one of the real strengths of social networking. The fact that previous cohorts can send things through to and be in conversation with current students is brilliant.
I’ve had some very exciting talks with colleagues recently about where we might take learning analytics within the institution and what role Assessment Analytics might play in this. In particular, I’m hoping that we might be able to do some ‘proof of concept’ analysis of some of our data alongside data from the Library Impact Data Project. You can find their blog here.
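To illustrate the sort of ‘proof of concept’ I have in mind, here’s a toy sketch correlating module grades with library-usage counts. All the numbers and field names below are invented for illustration; the real analysis would of course join our institutional assessment data with the Library Impact Data Project’s usage data.

```python
import statistics

# Invented toy data: average module grade and library e-resource logins
# per student. Student IDs and values are made up for illustration only.
grades = {"s1": 72, "s2": 58, "s3": 65, "s4": 40, "s5": 68}
library_logins = {"s1": 120, "s2": 45, "s3": 80, "s4": 10, "s5": 95}

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from first principles."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Join the two datasets on student ID before correlating.
students = sorted(set(grades) & set(library_logins))
r = pearson([grades[s] for s in students],
            [library_logins[s] for s in students])
print(round(r, 2))
```

Even a simple correlation like this wouldn’t establish causation, of course, but it’s exactly the kind of quick check that could tell us whether a fuller joint analysis is worth pursuing.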