RIDE 2024 - Day 1 Morning

Clare Sansom

The eighteenth annual conference of the Centre for Online and Distance Education (CODE) at the University of London – or RIDE – was, for the second successive year, a hybrid meeting. The face-to-face meeting was held in Senate House at the University of London and every session was also streamed online. CODE director Linda Amrane-Cooper welcomed delegates by emphasising that this allowed speakers and delegates from across the globe to participate fully. She also introduced the theme of the meeting, Learning: Anything, Everywhere, but How? The world’s knowledge is ‘increasingly available with the swipe of a finger on a mobile device’: but what does this mean for learning? How can learning online and at a distance be best facilitated for students and those who teach them?

Linda then handed over to the Vice-Chancellor of the University of London, Professor Wendy Thomson CBE, to welcome all delegates to the meeting and those ‘in the room’ to its venue, Senate House. Wendy described very briefly the history of this unique building, completed in 1937 and thought to have been one of the inspirations for George Orwell’s 1984: ‘we even have our own Room 101’. Later, it appeared in films from V for Vendetta to the Harry Potter franchise. As the centre of the University of London, which has over 40,000 registered students from 190 countries, it is also an ideal venue for RIDE with its focus on online and distance education. She welcomed three visiting scholars from Brazil and China and thanked Linda and the CODE conference committee for their hard work before handing back to Linda to introduce the first keynote session.

Linda began this introduction with a rhetorical question: what has been the hot topic in higher education over the last year? The answer, she said, had to be artificial intelligence: ‘have we talked about anything else?’ The conference would start with two back-to-back keynotes taking contrasting approaches to this important topic. She then introduced the panel of three speakers to present the first, a ‘discursive keynote’ on generative AI: Donna Lanclos from Munster Technological University, Lawrie Phipps from JISC and Richard Watermeyer from the University of Bristol. 

Neoliberalism, Collegiality, and Authenticity 

The keynote was opened by Donna, who explained the cross-disciplinary background of the panel. She is an anthropologist, concerned with the nature of digital and physical information; Lawrie, who was originally trained in environmental science, researches digital leadership and digital transformation at JISC; and Richard is a sociologist of higher education. Their talk would focus on large language models such as ChatGPT that synthesise information from across the Internet to generate text. It would not touch on how these models are programmed but would explore how academics behave when they are confronted with these tools in their practice, and what this tells us about academic values. And she added a disclaimer, acknowledging both her team’s personal benefit from the tools they research and those tools’ intrinsically exploitative nature.

She then handed over to Lawrie to describe, briefly, a survey of professionals from a broad range of disciplines in UK universities carried out in spring 2023. There were 428 replies, 284 of which were from individuals who self-identified as academics. About half were already using generative AI and almost all the others anticipated that it would impact their future work ‘whether they liked it or not’. 

Richard then explained how generative AI fits into what he called the ‘landscape of academia’ in 2024. He noted that while much of their data was based on UK-wide surveys, their work is international and many of their findings are echoed in other countries including the US, Australia and Singapore. But the situation in the UK is among the most extreme. One survey response from a UK academic sums it up well: ‘UK HE is utterly broken, with an intrinsically corrupt, poorly led business model driven by pseudo-metrics’. To use a Taylorist metaphor, staff – many on short-term, precarious contracts – are committed to a ‘production line’ of endless output, in a largely hostile environment and in competition with each other. The inequity of this situation, which sidelines anyone with caring responsibilities, was present before the COVID pandemic but has been exacerbated by it, and by associated crises in physical and mental health. There are also multiple funding problems, exacerbated by Brexit, and it is not even certain that all UK HE institutions will survive in the medium term.

So, how does generative AI fit into this bleak situation? Lawrie took up the baton again, using survey responses from academics to illustrate what he called ‘algorithmic conformity’ and thanking those who completed the surveys for their honesty. He is concerned by the extent to which these academics are using generative AI in their teaching, in their decision making and even in pastoral work. They may be thinking of the tools as if they were a colleague – ‘someone to bounce ideas off’ – but these tools can only offer ‘data scraping’, not new ideas. In one example, if he uses an AI tool to summarise and combine interview transcripts, it will produce a reasonable, and potentially useful, concise summary but without any of the outlier responses that are often the most interesting. Algorithmic conformity arises largely because of pressure of work: overwhelmed staff under extreme time pressure will resort to these tools, almost unthinkingly, to cut corners.
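Lawrie’s transcript example is easy to reproduce. The sketch below is a minimal illustration only – it assumes the openai Python client and a hypothetical folder of transcript files, not whatever tool he actually used – but it shows why the outliers disappear: a single summarisation call flattens many voices into one consensus text, and nothing in it asks the model to preserve dissenting or unusual answers.

```python
# Minimal sketch of transcript summarisation with an LLM. Assumes the
# `openai` Python package and an OPENAI_API_KEY in the environment; the
# folder name and prompt wording are hypothetical.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Combine every interview transcript into a single prompt.
transcripts = "\n\n".join(
    p.read_text() for p in sorted(Path("interviews").glob("*.txt"))
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Summarise the key themes in these interview transcripts."},
        {"role": "user", "content": transcripts},
    ],
)

# The result is one fluent consensus summary: outlier responses are exactly
# what this kind of call averages away.
print(response.choices[0].message.content)
```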

When AI is used extensively as a tool to increase efficiency, it does so at the expense of authenticity and creativity, and it doesn’t even ease the pressure: managers can always find more work to fill any time saved by using AI. Other problems include a loss of collegiality and, of course, an exacerbation of the digital divide. If working with AI comes to be seen as an essential part of academia, where does that leave the countries, institutions or individuals who lack access to the hardware and software?

Donna responded to the frankly dystopian vision that Lawrie had presented by returning to her discipline of anthropology, insisting that we re-focus on humanity. We created the university and we can re-make it to value the thoughts and processes that are unique to the human mind. Why would we outsource the workings of our mind to a machine? To quote the American activist and writer Rebecca Solnit, ‘if you don’t want to write, why pretend that you are writing’. We should always consider whether any of the ‘time, and mess, and failure’ that AI tools might save us is worth it. It is possible to say no. 

She concluded with a passionate argument for the value of human mess, human tentativeness, and human unfinished business: we are all much more than the sum of our creativity. She and her colleagues are not arguing for generative AI to be banned, but for much greater independent thought about when and how it might – occasionally? – be helpful. 

This compelling and thought-provoking, if at times deeply pessimistic, presentation was followed by a ‘world café’ type activity in which delegates ‘in the room’ discussed some of the questions that it had raised around their tables, while those online responded to similar prompts on a Padlet board. Points highlighted in a very short final discussion included the use of AI and ‘disinformation’ in election campaigns, particularly in the US, and, closer to home, in the metrics that mark student success.

The Progress and Promise of Generative AI 

Where the first keynote had been philosophical and pessimistic, the second was optimistic and practical. Nicolaz Foucaud, the managing director for Europe, the Middle East and Africa of the online education platform Coursera, gave his talk the subtitle ‘Lessons from the Online Classroom’. He explained that he would be presenting the ‘opposite side of the argument’ from the previous session, which he had found interesting as well as challenging. He hoped that his, too, would contribute to a ‘healthy debate’ around the impact of AI on learners.

He then set the context with an introduction to Coursera, which was founded in 2012 by two professors of computer science at Stanford with the noble aim of bringing ‘the best learning to every corner of the world’. He illustrated this with reference to the UN’s 17 Sustainable Development Goals for the years 2015-30. At Coursera, they believe that Goal 4, Quality Education – if delivered accessibly to all – should be a driver and enabler of all the others, from tackling poverty to gender equality to climate action.

Quality education provides a route out of poverty through skilled and well-paid employment. Nicolaz presented data first from 2017 and then from 2023 to show how the advent of AI is profoundly changing the world of employment and therefore of opportunity. Only seven years ago, research from Oxford University and the US Bureau of Labor Statistics showed low-skilled jobs to be at much greater risk from automation than those that required higher (post-secondary) education. Since then, however, AI has brought about a profound shift. High-skilled, high-paying jobs are now at the same risk of replacement by AI tools as factory-floor ones were by robots just a decade ago.

Education is no less important in this new world, however: in fact, the reverse is true. The World Economic Forum has estimated that, by 2027, over 60% of workers will need retraining but only 50% will have access to this training. The skills that are, and will be, most in demand are ‘soft’ skills such as building relationships, negotiation, creativity, critical thinking and – most interestingly – ‘coping with uncertainty’. He and his colleagues at Coursera have noticed a shift in demand towards these skills from employers as well as learners.

And another significant change in learning patterns has taken place over recent years: a much higher proportion of the 142 million registered learners on the Coursera platform are opting for short ‘micro-credentials’.  Coursera works with over 325 ‘educator partners’ from business and academia to provide different types of content in a ‘stackable’ format, ranging from over 200,000 5-10 minute clips, each introducing a particular topic, through courses and certificates to about 50 full degrees. Learners can build from shorter to longer form and tailor packages of content to suit their needs. 
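As a purely conceptual illustration of what ‘stackable’ means here – this is invented for this post, not Coursera’s actual data model – shorter units can be nested inside progressively longer credentials, so short-form study accumulates towards longer-form awards:

```python
# Conceptual sketch of stackable learning content, invented for illustration;
# it does not represent Coursera's real data model. Short units nest inside
# longer credentials, so study "stacks" from clip to course to certificate.
from dataclasses import dataclass, field

@dataclass
class ContentUnit:
    title: str
    minutes: int = 0                          # study time of this unit itself
    parts: list["ContentUnit"] = field(default_factory=list)

    def total_minutes(self) -> int:
        """Total study time, including everything stacked inside."""
        return self.minutes + sum(p.total_minutes() for p in self.parts)

clip = ContentUnit("Intro to spreadsheets", minutes=8)
course = ContentUnit("Data skills for business", parts=[clip])
certificate = ContentUnit("Professional certificate", parts=[course])

print(certificate.total_minutes())  # 8: credit earned at clip level counts upwards
```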

And this is where AI as a tool for course development comes in. Because Coursera works only with trusted, accredited partners from universities and companies, providing the full range of content that learners need is a huge challenge, particularly as the greatest demand is for short packages that can be combined and customised into pathways to degree-level employment. So far, 1.8 million people have enrolled on this type of pathway. AI may be disrupting employment, but it is also enabling access to high quality, effective learning.

One particularly important way in which AI can enable access to educational opportunities is through translation. Offering courses in one language only, or in a very few, limits their take-up and therefore their impact. Using AI in the form of machine learning for translation can greatly accelerate this task and increase the number of languages available.
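The talk did not say which systems Coursera uses internally, but the basic step is easy to picture. A minimal sketch with an open-source model – here the publicly available Helsinki-NLP English-to-Spanish model on Hugging Face, with a hypothetical file layout – shows how subtitle translation can be scripted across a whole catalogue:

```python
# Illustrative batch translation of course subtitles; not Coursera's actual
# pipeline. Uses an open-source Helsinki-NLP model via Hugging Face
# transformers. The directory layout is hypothetical.
from pathlib import Path
from transformers import pipeline

# Load a pretrained English-to-Spanish translation model.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-es")

out_dir = Path("subtitles/es")
out_dir.mkdir(parents=True, exist_ok=True)

for subtitle_file in Path("subtitles/en").glob("*.txt"):
    lines = subtitle_file.read_text(encoding="utf-8").splitlines()
    results = translator(lines)
    translated = "\n".join(r["translation_text"] for r in results)
    (out_dir / subtitle_file.name).write_text(translated, encoding="utf-8")
```

Swapping the model name extends the same loop to any language pair with a publicly available model, which is the scaling effect Nicolaz described.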

Nicolaz then used a brief video to illustrate how AI can now impact translation. This showed the CEO of Coursera, Jeff Maggioncalda, speaking initially in English, his native language, and then apparently switching into many other languages in turn. Jeff is not necessarily a skilled linguist: it was AI-based synthetic video technology that made him appear to be speaking those languages. The inclusion of authentic facial and mouth movements, rather than just textual subtitles, makes the translation much more accessible. Introducing this technology comprehensively throughout Coursera’s portfolio should make its courses accessible and engaging for hundreds of millions of potential learners. This is essentially the same technology that is used for the ‘deepfake’ videos that now plague the Internet, but Coursera’s use is strictly regulated by its Responsible AI principles.

Coursera is also using two other AI-based tools: Coach, which acts as a ‘partner’ for students to talk through parts of courses that they find difficult, and Course Builder, which allows educators to search within the catalogue for material on specific topics and build new courses by incorporating their own material. This is a straightforward process and, to address one of the questions posed in the first keynote, the obvious questions about ownership and IP are covered by the same Responsible AI principles. The University of London is one of Coursera’s academic partners, and all its content on the platform is governed by a strict IP agreement.

These two contrasting keynotes gave delegates much to think about as they moved into the first set of parallel sessions. Perhaps the contrast can be encapsulated by the thought that, like so many things, generative AI could be a good servant but a bad master?

CODE will be continuing its discussions of this fascinating and important topic in further events this academic year: webinars on AI in learning design on 8 May and the annual Supporting Student Success workshop, with a focus on assessment, on 14 May.