ALT-C Day 2

I’ve come to ALT-C for the day to participate in a symposium debating effectiveness and efficiency in assessment and feedback, representing the work of the project. I’ll be joining representatives from several other projects in the assessment and feedback programme.

This post will offer a few passing thoughts on the presentations I attend:

There was a fascinating and somewhat provocative presentation from Bridgend College on the use of Facebook, which raised a few hackles. It’s interesting to see how it still generates such anxiety in the ALT community, including the concern that it devalues or discredits the institutional VLE. I was concerned by their statement that ‘all students use it’, which I know simply isn’t true. A small but significant proportion of my students refuse to use Fb for all sorts of very sound ethical and moral reasons, so there’s no way I could require them to use it in the way Bridgend require their students to. I would find it very troubling to require a student to use Fb against their wishes, but I have no problem requiring them to use the VLE. Their response to my question was that in the music industry (in which their students are preparing to work) this is the standard, and they’ve never had a student who doesn’t use it. I do wonder about its value (in their design) in other disciplines where the use of Fb is less likely to be universal.

The next session offered some fascinating work from Brian Mulligan from the Institute of Technology Sligo and Penn State on open learning badges. The idea of mastery learning is central to this and relates to the assessment analytics work we’ve been doing in the project. The simple statement that grades don’t guarantee competency is really troubling to the normative discourses of education. This presentation proposed a different way of thinking about this and an infrastructure to support it. Brian asked some provocative questions: is HE a cartel? Why do employers value our qualifications so much? Why might they like badges as an alternative? How might this drive change? What do we need to see to certify that someone can DO something? It could raise employers’ expectations and could even challenge long-standing reputations. We could even stop using degrees. These are questions and ideas that strike at the very heart of the pedagogy of assessment and feedback, not to mention the technologies used to support and facilitate it. It’s clear that trust is at the heart of all of this – which is true of how things stand at the moment. This raises the possibility that employers trust the current qualifications and accreditation system because that’s all that’s available for them to trust. The spectre of this operating as an open market is one about which I’m a little wary. MOOCs were mentioned and, I suspect, are something of a thread or theme running through this conference. The role this might play in adult learning, work-based learning and simply as a way of shaking up HE is really fascinating. The issue of course and learning coherence and aggregation runs the risk of getting us no further than where we already are (as one of the questioners put it, giving students a rag-bag of badges to replace the current poorly articulated learning outcomes within degrees). Another questioner liked the terminology of the ‘democratisation of accreditation’.
And the final question was a corker, from someone from the Girl Guides. It was the sort of thing I’d been interested in asking myself, related to things like gamification and folksonomy. She mentioned that the Girl Guides are interested in introducing digital badges which turn into real badges, which sounds fascinating. I’m going to put her in touch with RITH to see if there are some ways they can help each other out. On the whole, I’m with Brian on this one – I would really like to see this succeed.

Brian came to talk to me after our symposium so I was able to share my thoughts with him about how this might connect to assessment analytics. I think this might be worth pursuing not simply because Brian seems to be as iconoclastic as I like to think I am but also because it might bring some interesting new dimensions to the project.

Our symposium seemed to go well. We were certainly kept sternly to time by Marianne (thanks!). It was good to once again hear from the other projects and remember just how many connections there are between them. The questions and discussion from the floor tended to focus on the knotty issues of eSubmission and eMarking, which is a shame to a certain extent, as the issues to do with the core pedagogy of assessment (which Gunter focussed on) were, I think, the more interesting ones. But this kind of goes to show just how much of the institutional concern at the moment is about getting the mechanics of this right first. Brian’s question to me from the floor was to do with the limits of efficiency. My answer to him was that of course there are limits to the efficiency gains we can make, but I can’t wait to get there! I guess this is at the core of the matter – getting the basic efficiency gains in place is something pretty much everyone is desperate for.

I also had conversations with folks from Manchester and the Open U after the session. I’d like to follow up the suggestion from the OU that the use of tablets and iPads is degrading the quality of marking because the typing is so poor. But this comes back to another point that Brian made – we need a ‘recipe’ for the infrastructural needs: is it dual screens? iPads?

I’m rather excited about an invited presentation on knitting, which is of little relevance to this project but hey – it’s knitting! It started with a bit of social ‘knitworking’ as little bundles of wool got passed around the room. At the heart of this is how we can revive the role of coding in computing, particularly within schools.

After a very pleasant lunch, spent talking PebblePad with folks from Wolverhampton, next was the keynote by Natasa Milic-Frayling. Her paper was about network analysis which, of course, overlaps with the assessment analytics work of this project. Her talk started by exploring different aspects of collaborative learning and the role that technology plays, and can play, within it. She then turned to network analysis by taking us back to 2004 and to UseNet. She mentioned that this was the first time that sociologists had data on human interaction. She talked about the challenge of bringing together the ways that sociologists and computer scientists think about networks. It really is absolutely fascinating, but its usefulness still seems to be limited to sociological rather than pedagogical outcomes. Her final statements were about why it is so important. Ben Shneiderman’s work, which explicitly uses this strategy to encourage social participation, comes closest to where I’m imagining this might be useful pedagogically.
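Out of curiosity about what this kind of analysis looks like at its very simplest, here is a minimal sketch of computing degree centrality from interaction data. The participants and interactions are entirely invented, and this has nothing to do with Milic-Frayling’s actual methods – it just illustrates the basic idea of counting who interacts with whom:

```python
from collections import Counter

# Hypothetical reply interactions in a discussion forum: (poster, person_replied_to)
interactions = [
    ("alice", "bob"), ("carol", "alice"), ("bob", "alice"),
    ("dave", "alice"), ("carol", "bob"), ("alice", "carol"),
]

# Degree centrality (unnormalised): how many interactions each participant is part of.
degree = Counter()
for poster, recipient in interactions:
    degree[poster] += 1
    degree[recipient] += 1

# The most central participants are the most connected in the network.
print(degree.most_common(3))
```

One could imagine a pedagogical use along these lines: spotting highly connected students as candidates for peer-support roles, or isolated ones as candidates for intervention.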

EAssessment Scotland 2012

I’ve travelled up to Dundee for the eAssessment Scotland conference to present on the work of the project.

The day opened up with a keynote from David Boud from the University of Technology Sydney. I’m really excited to hear David speak as I’ve long admired his work. He started by challenging us to think carefully about what feedback really means in the context of assessment in Higher Education, suggesting that simply finding new strategies or trying to do it better isn’t going to solve the problem of feedback.

He suggested that some of the mechanisms we use (e.g. improving turnaround times) aren’t in themselves going to solve the problem. Instead, he proposed, we need to rethink what feedback is – specifically that we should think of it less in terms of input (what teachers do) and more in terms of output (what students do with it). He said that feedback is one of the few times that the diversity of the student body is connected to the specificity of the curriculum.

David took his inspiration from the epistemological origins of feedback: biology and engineering. What intrigues me about his generational models of feedback is just how unthinkable much of what he is suggesting currently is in terms of managing the process and the data. Finding ways to gather and channel the information flows is an important part of what he is proposing.

The final layer of the generational change is agentic: putting feedback in the hands of the students. One of the problems he identifies is how students calibrate their own judgement. Here, rather than simply being an adjunct to marking, it is now integral to all learning processes. Self-regulation is central to the process and, he suggested, something that needs to be introduced earlier to shift learning identity. It should be normal, for instance, for us to ask students: ‘what sorts of comments do you want on this piece?’

David turned to consider the theme of the conference: what can technology offer? The ones that stood out for me:
– Quick knowledge of results and calibration of judgement
– Knowledge of what has gone before. What have I told student before? What have other tutors told this student before? If they’ve been told this before, how can I explain it differently because explaining it the same way again is unlikely to work?
(I was really pleased at this point to hear David have a big go at anonymity! He made the point that anonymity is incompatible with the concept of feedback as he has conceptualised it.)

An absolutely superb keynote which really cuts to the heart of what we really need to think about.

In the next session I delivered a workshop on the EBEAM project and got some great questions. One of the delegates made the very important point about audio feedback and accents, with his lovely rich, thick Scottish accent. Accents are the new handwriting! It was great to have the opportunity to discuss these things with Scottish institutions. While they are facing some different challenges, their issues are also largely the same.

After lunch the second keynote was delivered by Russell Stannard from the University of Warwick. I’ve enjoyed Russell’s presentations before, when I was invited to participate in a conference at Harper Adams. He has a real knack for making video feedback seem achievable and accessible. Today he went through the journey he’s been on to develop his practice, and he is refreshingly prepared to show his starting point and some of the early iterations of the work he has done.

Seeing him this time caused me to reflect on how the proposed developments of the Grademark tool will allow us to do many of the things that he advocates but also automatically returns it to the students. Russell made the point about dyslexic students finding the audio feedback really helpful but it is worth also thinking about whether students on the autistic spectrum might find it more difficult to engage with and interpret than written feedback. He also made the same point Diana Laurillard makes: that using both the auditory and visual channels is useful.

Russell’s stuff is great, but when I think about managing this with a cohort of more than, say, 30 students, my heart sinks. The fact that he manages so much of this through email means that it’s not scalable to the size where you genuinely get economy of scale. The principles are all sound and exciting, but the administrative load that would come with it is huge. I suspect that technology will overtake his work. The next iteration of the audio tool in Grademark will do much of this. If there is also a channel for students to use to respond, then there is real potential for us to realise the vision that David Boud shared with us this morning.

The next session was a seminar presented by Sue Timmis from the Uni of Bristol and Steve Draper from the Uni of Glasgow. They reported on their research into eAssessment and the different understandings of what it means. Sue took us back to Rowntree’s 17 principles of good assessment. She described the role that assessment plays in providing students with certificates for future employment as the elephant in the room. As Bloxham and Boyd point out, however, this is one of the four key reasons why we assess, and it simply needs to be kept in balance with the others.

Next we heard from Cherry Hopton and her students from Angus College. As Cherry says, what she’s doing isn’t particularly flash, but it’s often the simple ideas which are the best. The idea of students making products which they share with each other is one that I subscribe to in my own practice, and the benefits they’ve shared resonate with my own. Hearing from her students about their experience of using Facebook was really powerful. Their example of inter-cohort communication is, as I’ve discovered in my use of Twitter, one of the real strengths of social networking. The fact that previous cohorts can send things through to, and be in conversation with, current students is brilliant.

Learning Analytics chats

I’ve had some very exciting talks with colleagues recently about where we might take learning analytics within the institution and what role Assessment Analytics might play in this. In particular, I’m hoping that we might be able to do some ‘proof of concept’ analysis of some of our data alongside data from the Library Impact Data Project. You can find their blog here.

Joining forces with research in the Netherlands

One of the exciting things that happened at the 5th International Plagiarism Conference was that Cheryl and I had the opportunity to meet with Patris Van Boxel from the Vrije Universiteit in the Netherlands. She has just received some funding, along with colleagues at several other Dutch Universities, to pursue similar evaluation of Grademark as we are undertaking in this project. You can see her presentation at the Blackboard Conference below. We look forward to sharing insights with them in the near future.

5th International Plagiarism Conference: Day two keynotes

The audience is bright eyed and bushy tailed (no really!) after the fantastic conference dinner last night and awaiting the first keynote of the day.

Tara Brabazon from the U of Bolton started the day exploring the intellectual and automated world in which we now find ourselves. Her point about the rather inappropriate strategy we take at induction mirrors Peter Hartley’s point that our overly-anxious induction strategies do little more than tell students that plagiarism is something that will benefit them, how to do it, and that they are likely to get away with it. The agenda for her talk was to discuss what is not being discussed. She named plagiarists as modern-day folk devils. There’s certainly growing evidence that plagiarism is getting a high profile, with several recent public cases (particularly in Germany, Romania and Hungary). Tara argued that information literacy is an important factor in all of this. She cited research conducted by JISC about the different reading practices that are undertaken on screen as opposed to paper. I’m surprised by this and am interested in looking into the research further. While she advocated paper-based marking as substantively different to online marking, she then complained that students didn’t collect the paper-based feedback over which she had laboured. She speculated that they were only interested in collecting their mark and not their feedback. However, the dislocation of these two things (offering a mark online and feedback on paper) is surely a more significant cause. This simply hasn’t been my experience: students are very keen, if not desperate, for their feedback. Joining these two things together and finding ways to get them to ‘talk to’ each other is, as this project is discovering, one of the key benefits of online marking.

The target of her concerns turned to assessment design and she offered a number of strategies but I can’t help wondering about the resource implications of her suggestions. I’m not at all convinced these are scalable to the large classes many of us are now having to deal with. It’s interesting that her observation of the different level of ability between Australian and British students mirrors my own experience. It was great to see the University of Wollongong‘s StartSmart program (the new version of ILIP) as a good example of this.

I have had the honour of being invited to offer the summary at the end of the conference. Some of the sessions I had signed up to had been cancelled so I used the time to put my thoughts together. I did, however, have the pleasure of chairing a session delivered by Radhika Iyer-O’Sullivan with the compelling title ‘I can’t say it any better’. She shared things from her perspective as a writing and EAP tutor at the British University in Dubai.

Kirby Ferguson took us in a different direction by starting from his ‘Everything is a Remix’ point of departure. He explored some of the myths of creativity (going way back in history, from the Enlightenment through the Romantics to Modernism). He covered a huge amount of territory, including Chic, Daft Punk, Bob Dylan, Richard Pryor and Hunter S. Thompson, showing that what we consider to be ‘plagiarism’ is vastly complex. He made the very compelling argument that a great deal of what we do in any kind of creative or intellectual endeavour is about adding to, building upon and transforming that which already exists. The steam engine, the QWERTY keyboard, the lightbulb and countless other technologies wouldn’t exist without this strategy. Inventions such as Fordism, printing and the world wide web all came into being by merging and bringing together existing technologies and ideas in new ways. Copy, Transform and Combine are the three layers of this. This notion of remixing has plenty of implications for what we consider to be plagiarism, of course.

His powerful point is that the notion of the lone creator, and of ideas as property, is changing and is different to how we used to think about it. His argument that this is linked to money is something that has resonated in several conversations I’ve had elsewhere at this conference. Obviously this left us all with questions about what it might mean for us, and Tracey Bretag asked the question on all of our minds. Not surprisingly, Kirby said that he doesn’t have a great answer to that. But his answer was great nonetheless: that it’s about honesty and transparency. He also acknowledged that he is talking from a very different context to ours.

She offered some interesting accusations about poor teaching and assessment strategies and made an impassioned plea for all academic teaching staff to be qualified to doctoral level (and therefore to be active researchers) and to be trained teachers. The call for ongoing professional development, a policy the HEA is now pursuing with fervour, is surely an answer to the situation in which we now find ourselves in the academy.

After (a rather delicious) lunch we were joined by Professor Jonathan Zittrain from Harvard University via video link from Cambridge MA. He took us through a journey which was very much from the student perspective: tackling the very real problem of filling in the blank sheet of paper that faces them. What I really enjoyed hearing in this paper was a very real and reasonable consideration of the emotional side of this from the student perspective. The difference between extrinsic and intrinsic motivation, he argued, is very similar for students and tutors. They are extrinsically motivated to get their work in to us; we are extrinsically motivated to mark it and get it back to them. He proposed moving this to a more intrinsically motivated system. The idea of ‘caring’ is central to this and is, of course, a really powerful emotion.

He shared some interesting new developments such as Amazon Mechanical Turk which is intriguing. I was really happy to hear him talk so positively about Wikipedia as a place where very lively academic discussion takes place and where the contributors are strongly intrinsically motivated. His encouragement to incorporate this kind of authoring into our assessment design is to be positively welcomed. He used the example of the removal of the ‘By’ option on the Creative Commons license as an example of just how valuable attribution is to the vast majority of people who contribute artifacts to the digital world. This is, he suggests, a really useful ‘hook’ to use to discuss the issue of attribution and respect. His point here is that even in this world where people are sharing things openly, they nevertheless care very much about provenance and attribution.

He talked about the role that academic publishing plays in all of this and, conversely, the impact that all of this is having on scholarly publishing, arguing that there has been a “collective abdication for using citation as ways of deciding academic merit”. His example of the Amazon case relating to George Orwell’s 1984 was truly extraordinary. I was particularly intrigued by his consideration of the library as moving from a fortress to a kind of ‘hub’ for this new digital world. He called for it to be a ‘curator with a sense for the material that is there and enough custodial interest in it’.

5th International Plagiarism Conference: Day 1 sessions and workshops

The first morning session was entitled ‘Can we rely on text originality check systems’, delivered by three colleagues from Stockholm U. They started by exploring the background to their study by offering national statistics for Sweden which showed that plagiarism is the dominant problem in terms of student discipline issues. Their research was considered in terms of a national procurement strategy for a text-matching system which included two national tools. It’s interesting to note that this research didn’t consider the role such a system might play in a larger assessment management strategy. They compared GenuineText, Urkund and Turnitin; the first two of these are Swedish systems. Their findings were that Turnitin was considerably faster than the other two in terms of simply returning the reports. They also compared the number of matches found by discipline area. Again, Turnitin performed best, but even it found less than half of the references in the text. This is a reminder that these tools can only ever be expected to return partial, not complete, matches and should be used accordingly, although, as the presenters acknowledged, this may be because some of the references tested were in Swedish. This reminds us that these tools are only ever useful as part of a holistic approach to academic integrity. The research also considered ease of use, and it was interesting to see screen grabs of the two Swedish tools with which I was unfamiliar. Their findings were that Turnitin came out best against their measures.
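To illustrate why such tools can only ever return partial matches, here is a toy sketch of text matching using Python’s standard library. The texts, source names and threshold are all invented for illustration, and commercial systems work very differently (fingerprinting against vast databases, not pairwise comparison), but the basic idea of scoring similarity and flagging above a threshold is the same:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a crude similarity ratio between two texts (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

submission = "The quick brown fox jumps over the lazy dog."
sources = {
    "source_a": "The quick brown fox jumped over a lazy dog.",
    "source_b": "Plagiarism detection is a holistic endeavour.",
}

# Score the submission against each candidate source and flag high matches.
matches = {name: round(similarity(submission, text), 2)
           for name, text in sources.items()}
flagged = [name for name, score in matches.items() if score > 0.8]
print(matches, flagged)
```

Anything a system has never indexed (a paywalled source, a text in another language, a paraphrase) simply scores low, which is exactly why a less-than-half match rate is unsurprising.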

After a refreshing cup of tea, I headed downstairs to the session on a ‘Phenomenographic exploration of the perception of plagiarism’ presented by Stella Orim. Teddi Fishman, who chaired the session, introduced it in terms of students who get constructed as ‘other’ within discussions around plagiarism. This is an interesting reminder of how this work overlaps with critical race pedagogy and theory. It also brought to mind Tracey Bretag‘s mention of international students, reminding me that what constitutes an international student is by no means stable (an Australian student studying in the UK is an international student, after all). The study considered students studying a postgraduate degree in the UK who had completed an undergraduate degree in Nigeria. She identified six themes:

  1. a lack of prior awareness
  2. understanding of the concept of plagiarism
  3. a fear of the concept
  4. the level of importance given to it at their previous institution
  5. an institutional system being in place in their previous institution
  6. possible ways of mitigating the concept in their previous institution

It was fascinating to hear her findings, and particularly what they tell us about what we need to do in the UK HE sector to provide appropriate support and guidance to students who come to study with us from Nigeria. The study also makes some very valuable suggestions for the Nigerian government in terms of providing support to students prior to travelling overseas to study. She has identified the fear that is generated in students upon coming to the UK for study and the impact that this has on their learning.

Next up was Xiaodong Yang exploring the epistemological and etymological origins of the term ‘plagiarism’. It’s interesting to see postmodernism, postcolonialism and hermeneutics as his theoretical framework. In combination these powerfully unsettle what we might understand plagiarism to mean. The subjects for this study were all from a Confucian heritage who had all been charged with plagiarism.

In the last session before lunch I had the pleasure of chairing a session delivered by Anwar Amjad from the Higher Education Commission in Pakistan on ‘User Acceptance of Turnitin application in Pakistan HEIs’. He started by sharing the development strategy which is being pursued in Pakistan and the strategic direction of the HEC. He indicated that the provision of new digital resources has had an impact on the incidence of plagiarism in HEIs. It’s interesting to see how videoconferencing has been used to provide training. This study used the Technology Acceptance Model (TAM) to measure the perceived usefulness and perceived ease of use of Turnitin. It’s interesting to see that while the perception of the usefulness and ease of use was quite high, the actual use remained quite low. It’s possible that streamlining the workflow for academic staff might be an important factor in this.
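TAM constructs like perceived usefulness and perceived ease of use are typically measured with Likert-scale survey items and summarised per construct. A toy illustration of that summarising step, with entirely invented item responses (this is not the study’s instrument or data), shows the kind of high-perception/low-use gap the presenter described:

```python
from statistics import mean

# Hypothetical 5-point Likert responses (1 = strongly disagree, 5 = strongly agree).
# Item wording and values are invented purely for illustration.
responses = {
    "perceived_usefulness": [4, 5, 4, 4, 3],
    "perceived_ease_of_use": [4, 4, 5, 3, 4],
    "actual_use": [2, 1, 2, 2, 1],  # self-reported frequency of use
}

# Each construct score is simply the mean of its item responses.
scores = {construct: round(mean(items), 2)
          for construct, items in responses.items()}
print(scores)
```

With data shaped like this, perception scores sit well above reported use, which is the pattern that makes workflow friction a plausible explanation.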

After the second keynote I had the pleasure of chairing another session delivered by Prof Peter Taylor from the Open U. He opened with an overview of the institution and the very particular problems that their distance learning and open access model presents. Later it became clear that their scale is also significant. He then took us through the journey they’ve been on in their institution, particularly in terms of putting policy into practice. The context in 2007 was probably pretty similar to other institutions here in the UK, if not elsewhere. The problems of detecting plagiarism were quite small in comparison to those that came after, in terms of prosecution and penalties, which reflected Virginia Barbour’s observations in her keynote. One thing that might be slightly different to other institutions is the OU’s very centralised structure, something that is almost certainly a real strength when it comes to things like plagiarism policy. He shared the website that they’ve developed and their open access materials on the learning space. The problems they’ve encountered are very similar to those experienced elsewhere in terms of consistency, reporting, tutors turning a blind eye etc. It’s interesting that they’ve used CopyCatch and Turnitin in combination because of their different strengths. It’s great to hear someone talking about the resource implications of all this (particularly in terms of things like turnaround time), and Mantz Yorke’s efficacy/efficiency balance comes to mind. The volumes they’re dealing with at the OU show that managing this from an ‘economy of scale’ perspective is vital. The low incidence rate is also pretty reassuring (less than 1%). I like the idea of the buddy system to mentor and support new academic conduct officers and setting performance standards.

Erica Morris from the HEA finished off the day with a workshop on designing out plagiarism. Her focus was on institutional strategies, and she even had a nice picture of our campus on one of her slides.

Thinking about assessment for learning is, she argues, central to any successful strategy for designing out plagiarism. She very helpfully considered this in terms of the ‘what’, the ‘how’ and the ‘when’ of assessment. I would want to add the ‘why’ as well, drawing on Bloxham and Boyd’s work.

5th International Plagiarism Conference: Day 1 keynotes

The hall in the Sage Gateshead is packed with people who have gathered from all over the world to consider the issue of plagiarism. I’m here to consider what role EAM might play in this.

The day has kicked off with Will Murray taking us on a trip down memory lane with his opening address. He reminded us of where Plagiarism Advice started and, more importantly, why. For me it provided a useful reminder of the scholarly origins of Turnitin in the UK context. It’s also a reminder of just how complex and multifaceted plagiarism and academic integrity are.

He then handed over to Craig Mahoney, the Chief Executive of the HEA, who offered a personal and historical welcome to Newcastle and Gateshead. He offered a powerful reminder that academic integrity is a moral code, that the implications of falling foul of it are serious and significant, and that detection tools should only ever be part of a holistic approach to academic integrity. In doing so he covered a huge amount of ground which clearly showed just how complex and complicated the assessment, feedback and academic integrity ‘landscape’ is.

Our first keynote, Tracey Bretag from the U of South Australia, offered us an image of an iceberg as a point of consideration for her Big 5 of Academic Integrity. She offered a definition from ICAI of the 5 fundamental values of academic integrity which, as she pointed out, is excellent on many levels. One of the most interesting things she explained was the findings of the policy analysis that her project undertook in the Australian sector. It shows how far they’ve come but also how far there still is to go. She listed the 5 core elements of exemplary policy:

  • Access: easy to locate, read and understand.
  • Approach: statement of purpose with an educative focus up front and all through the policy so that it is consistent.
  • Responsibility: details for all stakeholders not just students.
  • Detail: making sure that it is adequate but not excessive.
  • Support: making sure that proactive and embedded systems are in place to implement the policy.

She then went on to report on the massive survey of students they undertook, explaining how it is different to other surveys that have been conducted because of its focus on policy. She offered some highlights of their findings which were compelling and which, as all good research does, uncovered more questions to consider. The discrepancy between home and international students and between undergraduate and postgraduate students is particularly interesting. She then went on to consider the role of managers and senior managers in this. Part of this discussion was unpacking the foundation concepts of Academic Integrity and how these are understood in the academy. She argued that it:

  • is grounded in action
  • is underpinned by values
  • is multifaceted and has multiple stakeholders
  • tends to be understood by many in terms of what it is not
  • is important in assuring the quality of the academic process.

She shared the deliverables of the project, which look very juicy indeed and will be freely available. Her metaphor of the iceberg came back at the end, reminding us that the values that underpin this are huge and not visible. It’s fantastic to hear that she and her team have successfully bid for OLT (Office for Learning and Teaching) funding to extend this work. I look forward to seeing more great stuff emerging from the research team.

After lunch Virginia Barbour shared her view from the editor’s desk. She shared her ‘staircase’ of academic ethics, which includes things like fabrication, falsification and, of course, plagiarism. Amongst the data she presented was the rather astonishing statistic that 14% of respondents know of misconduct amongst others. I got a very real sense that the problems she faces are very similar to those we find in the university sector, which lends weight to those who argue that plagiarism as a student often resurfaces in professional life. This has triggered some further thinking about the ‘arms race’ in which we are engaged with our students and the fact that, ultimately, if we don’t catch them while they are at university, chances are it will catch up with them at some point in their lives.

She shared some suggestions for overcoming the barriers to combating plagiarism and other ethics issues:

  • accept there is a problem
  • accept that combating it will involve time, money and effort
  • improve detection (through the use of technology)
  • ensure that we take action
  • tackle the root problem.

She acknowledged that the last three moved from being relatively easy to very, very hard.

She shared information about COPE and showed their website, which has lots of interesting material including decision trees, guidance and cases. Questions to her focused on the responsibility of reporting instances of plagiarism to employers but, as Virginia made clear, this is enormously complicated.

iParadigms Focus Group

Today I’m attending a focus group with some of the executive from iParadigms, many of whom are visiting the UK for the IPC conference. I’ll update this post during the day (including the user group being held in the afternoon) as I know people are keen to find out more about the fine detail of the future vision for this tool.

The day kicked off with some presentations from the executive, starting with Steve Golik, VP of Product Management. He started by talking about the future of marking. He shared the growing vision for developing Turnitin to the point where Grademark becomes the centre of the tool rather than originality checking. As far as I am concerned, this is a very positive move. He offered some screenshots of the new designs they’re working on, some of which are derived from the iPad app. It certainly looks a lot cleaner and more intuitive. He used the term ‘directionality’, which had a few of us scratching our heads, but what it appears to mean is that you shouldn’t have to leave the context of the paper to do what you need to do to comment on and mark it.

There was also a vision of being able to use the tool formatively to support iterative development over several drafts of a paper. In practice this means being able to see how, where and to what extent a paper has changed from one draft to the next. With this becoming practicable, the idea of being able to offer meaningful iterative writing support might have an impact on assessment design as the option of marking multiple drafts of the same piece of work is certainly not commonplace in the UK. In any case, this would certainly be of use for postgraduate writing support.

He also made mention of the future vision for voice comments whereby comments made in the audio recording are linked to signposts within the paper. This effectively turns it into video feedback rather than simply audio feedback, which makes a big difference to the usefulness of voice comments with respect to feedback vs feedforward. The biggest news was probably the indication that they have taken our requests on board to make multiple marking possible and more flexible. The flexibility to share rubrics more broadly is a very welcome addition which effectively turns Grademark into a social tool and puts them in competition with iRubric.

He then turned to consider the future of analytics: a matter dear to this project’s heart. It’s interesting that keeping track of the amount of time marking takes is something they’re clearly considering: capturing this data has been notoriously difficult. Collecting data on vocabulary use was one I wasn’t expecting. It’s clear that they have a strong sense of how this data needs to feed into a wider data ecosystem to be of use to the Academic and Learning Analytics that institutions are trying to build. They are also proposing richer ways of tracking the extent to which students engage with their feedback, which is something that many tutors report as desirable. They’re also proposing building an interventions tool in the form of automatic interventions. There was also mention of stylistic analysis and ‘stylometrics’ as a way of identifying ghost writing, which many academic staff will find useful. Whether this might in itself constitute evidence or just a trigger for further investigation is, I guess, a question for registry, though it might well be a trigger for a viva. This uncertainty as to the role stylometrics might play was acknowledged by Christian Storm at the user group later in the day.

The final point was making better use of crowdsourcing to ‘tune’ the originality ‘noise’, harnessing the professional judgement of those engaging with the tool. This would be of particular use for those disciplines (I’m thinking particularly of Law and Music) where particular turns of phrase are routinely used. It means that the more the tool is used, the better it gets. So – some exciting and intriguing ideas emerging from that presentation and, I have to say, a compelling vision of their future development.

After lunch we heard more from the exec about the roadmap which is mainly stuff that is much closer to release. The most exciting developments from my point of view are the following:

  • Core Rubrics: this is designed to support State-level learning outcomes for the US secondary education sector but it might be useful for course and level learning outcomes in HE in the UK.
  • Receipt Retrieval: allowing students to access their proof of receipt in ways other than via email.
  • Flexible Grading: including letter grades and decimal points.
  • Simpler Rubric: which offers much more flexibility.

But by far the most groundbreaking advance is the new iPad app. We saw some glimpses of it today, and it seems to have triggered some positive rethinking of the main online document viewer as well. The planned launch date is January: just in the nick of time in terms of this project. It’s worth preparing academics for its use at the start of the academic year. The silence of typing on the iPad will be very attractive to my colleagues in music.

A personal voice? Audio feedback day at the Uni of Leicester

Today I have travelled to Leicester to hear from colleagues at the U of Leicester and elsewhere about ‘the whys and wherefores of effective audio feedback’.

The day began with an overview from Alex Moseley from the U of Leicester reporting on the two research projects around which today’s event revolves. These projects (A Personal Voice and AUDIBLE) have focused on the pedagogical and linguistic aspects of audio feedback. They found that the research on pedagogy and practice is dominated by podcasting, while there is a dearth of evidence on the practice and effectiveness of audio feedback itself.

The first half of the day has been very hands-on, with practical activities to give us a very tangible sense of the differences between written and audio feedback. We’ve experimented with lots of tools, including a voice recorder, Adobe PDF on a PC, Explain Everything on an iPad and Jing on a PC. A lot of the discussion afterwards focussed on the emotional side of things as well as the practical and the pedagogical. Lots of food for thought, including what about audio feedback is different to written feedback and what about it is the same. You can see the fruits of our labour in the image attached to this post.

After lunch we heard from Jennifer Beard, the research assistant on the two projects that are nearing their completion here at the U of Leicester. It’s clear that the student and staff response to audio feedback has brought lots of positives. It’s interesting to consider that an unfamiliar accent or a speech impediment could be similar to bad handwriting for students in terms of finding it difficult to interpret.

Some of the key points the project investigated are:

  • Contiguity – associating the feedback with the section of their work to which it refers.
  • Working memory – looking specifically at the difference between chunked-up feedback and holistic feedback.
  • Personal vs community experiences – using audio feedback for group feedback vs individual feedback. Students feel like they are more of a class if they get the general feedback but they also value the individual and personal feedback available.
  • Summative vs formative assessment – using audio feedback for summative assessment is riskier than for formative.
  • Nuance – does the audio format change the way we phrase things?

They found that students’ preferences for the type of feedback varied from task to task, which is interesting, but a hybrid approach (with some written and some spoken) seems preferable. This, of course, raises a workload issue for academic staff, who are forever being asked to do more than they currently do. When the added load of returning audio feedback to students is factored in, the false economy seems even more stark.

Warren Kidd from the U of East London reported on his experience of using audio feedback in his role as a teacher trainer. He explained that voice is a big part of the learning in this discipline, which makes it seem quite strange that they had never given, or thought of giving, audio feedback on assessment. He talked about coming to terms with the awkwardness and anxiety that comes with recording your voice (quite a natural response) in order to do this effectively. He says that in an assessment for learning context, feedforward is more important than feedback. He offered some really constructive advice on approaching the task of providing audio feedback: not to script recordings but to ‘skeleton’ them. This means jotting down a quick list of dot points that the feedback will cover before pressing the record button. He mentioned, very convincingly, that audio adds ‘technicolor and texture’ to the feedback and that students are used to receiving audio from us.

He emphasised the importance of making students write something about their feedback and the impact this has on their learning. He went over some of the conversations that colleagues needed to have before using audio feedback, thinking it through on practical as well as pedagogical grounds. Many of these questions are immaterial to the use of audio feedback in Grademark because of the way it works, but it reminds us that, if nothing else, the value of these kinds of innovation may be in the conversations they encourage us to have. He gave us an example of his own audio feedback, which was fascinating to hear. There was also discussion about when and how to disclose the mark (again, immaterial in a Grademark context) and how to break bad news (such as work that has failed or been referred).

Finally we heard from Hedley Bashforth at the U of Bath (heard in the sense that he was unable to join us and had sent audio files instead). He talked about how he came to use audio feedback for formative feedback. He made specific mention of the workload implications of having to return feedback to students being so significant as to make it not worthwhile. He then gave us an example of his feedback, which was really interesting to hear.

Some things to follow up: it’s worth looking at Andrew Middleton at the U of Sheffield, who has done a lot of work on audio feedback.

[Image: 20120629-155118.jpg]

ASKe plagiarism event: workshops and panel discussion

Jon Scott from the U of Leicester reviewed the research that has been conducted by the AMBeR project about the different approaches to plagiarism penalties across the sector. He went over the tariff strategy and then introduced Jo Badge’s research into the implementation of the tariff in different universities across the sector. The discussion was focussed on some remaining issues that fall outside the benchmark tariffs as they stand. These are specifically how we deal with the following in terms of penalty:

  • collusion within and between years and students
  • extenuating circumstances
  • a guilty plea
  • large projects
  • resubmission as a viable option
  • career impact related to professional bodies

This generated fascinating discussion which unearthed a lot of variance in practice which itself was interesting.

In the discussion mention was made of research by Robert Clarke from Birmingham U on contract cheating which is worth looking into.

Gill Rowell from iParadigms took us back in time to 2002 and the start of the roll out of Turnitin through the U of Northumbria. She reminded us of the things we were concerned about 10 years ago in comparison to now. She asked us to consider what has changed in terms of institutional perspectives: the big change is that Turnitin is now much more embedded into our VLEs rather than accessed through the website itself, and there is greater engagement with it online rather than on paper. She then turned to consider the student perspective in terms of raising student awareness and improving ease of use. The way that Turnitin is becoming the mechanism through which their assessment is managed is also a big change for them and we know, from this project, what a big impact this is having. Finally she turned to consider staff perspectives, where the impact on innovative assessment design has been quite disappointing. The issue of correctly interpreting originality reports, and providing training for that, is also something that needs more work. In discussion we were asked to consider whether it has made a difference. It’s certain that we are identifying more instances of plagiarism than we did before its use.

Panel discussion themes:

  • Attaching the message of academic integrity to something else that matters in the institution.
  • Linking this clearly to discussions of ethics and unfair practice is important but complicated.
  • Focussing on the authenticity of assessment is important.
  • Trying to draw the lines between acceptable and unacceptable practice in terms of proof reading and translation is difficult but needs to be considered.