In an ever-evolving AI landscape, how do higher education information services step up? Beth Burnet and Jon Phipps from the University of Essex Library Services dive into the world of AI and explore how HE libraries and information services can respond and support their users.
The AI Landscape
When we hear people talk about Artificial Intelligence, what comes to mind?
Perhaps it conjures up images of a Blade Runner-esque world, a future where robots rule us all, and city life has turned into a cyberpunk dream – or a nightmare.
Or maybe we can imagine a utopian society where all those tedious jobs you procrastinate over day-to-day are now automated, leaving you time to explore your true interests – and all from the deck of your sky garden in the clouds.
In reality, AI is no longer a thing of sci-fi imagination, but is instead already being used – and not just by high-level tech companies, but by you and me, every day. Think about a casual scroll through TikTok or Instagram, and the algorithm that delivers tailored recommendations and ads to you – it's all AI.
AI from a Library Perspective
For us at the university, AI is even more relevant to daily life: from translating articles to captioning videos, and from checking assignments for plagiarism to searching for texts to cite.
But what does AI look like from a Library and Cultural Services perspective?
Using AI in library services has clear pros and cons. On the one hand, AI can open up access to materials through better search tools and can help with processing large amounts of data, such as when cataloguing books in the library. On the other hand, there are undeniable issues with bias: historical imbalances can be carried into the future by technology that has those biases built in (intentionally or not) – particularly in an education setting where, historically, the most favoured voices have been those of cis white men.
To show this in practice: in a recent webinar on the subject hosted by Sherif (the Shared E-Resources Information Forum), a community forum for academic e-resources, Dr. Andrew Cox from the University of Sheffield presented an example of using generative AI to process 10,000 video files that had only titles to search on. It is unrealistic to enhance the metadata of these records manually, so AI might offer a more efficient and detailed search – for example, based on voice or image recognition, or by identifying music. However, this may not be 100% accurate, particularly given systemic bias: if the AI is trained mostly on majority voices, languages, or accents, it will serve minority speakers less well.
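To make this concrete, here is a minimal sketch of what such metadata enrichment might look like, assuming the open-source Whisper speech-to-text library; this is our own illustration, not the system described in the webinar:

```python
# A minimal sketch of AI-assisted metadata enrichment for title-only video
# records. Assumes the open-source Whisper speech-to-text library
# (pip install openai-whisper) and ffmpeg for audio extraction.
import whisper

# Load the model once; "base" is a small general-purpose model, so accuracy
# will vary with speaker, accent, and audio quality.
model = whisper.load_model("base")

def enrich_record(video_path: str, title: str) -> dict:
    """Generate a searchable transcript for a record that only has a title."""
    result = model.transcribe(video_path)       # speech-to-text over the audio track
    return {
        "title": title,
        "transcript": result["text"],           # now full-text searchable
        "language": result.get("language"),     # detected language, useful for bias checks
    }

# Caveat from the webinar: a model trained mostly on majority accents and
# languages will transcribe some speakers far less accurately than others,
# so transcripts like these are a search aid, not a ground truth.
```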
The cost of such a system could also be a large barrier, which taps into a wider issue of equal access to AI for students. If higher-end AI systems are out of reach for some, do those who can afford them gain an unfair advantage?
The Sherif webinar highlighted both the positives and negatives of AI's future in education, and gave insight into new ways of using it. For example, we learnt how the NHS has recently used large language models to produce informative materials such as pamphlets – feeding in source information and asking the AI to draft from it – and to generate bulk emails. This raised the question of whether the library could use something similar, or whether it would threaten creativity. As the webinar stressed, there would also be a need to watch for hallucinations (confidently stated fabrications) as well as bias. Even so, the overwhelming takeaway was that we can use AI positively as long as we shape the future ourselves and work with students.
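As a rough sketch of that kind of drafting workflow – the model name, prompt, and source notes below are our own illustrative assumptions, not the NHS's actual setup – a large language model can be asked to draft a leaflet strictly from supplied facts, with a human checking the output afterwards:

```python
# Hypothetical sketch: drafting an information leaflet with an LLM.
# Uses the OpenAI Python client (pip install openai); the model name and
# prompt here are illustrative assumptions, not a real production workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

source_notes = """Opening hours: 9am-9pm weekdays.
Study rooms bookable online. Laptop loans available at the help desk."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model for illustration
    messages=[
        {"role": "system",
         "content": "Draft a short, friendly library pamphlet using ONLY the facts provided."},
        {"role": "user", "content": source_notes},
    ],
)

draft = response.choices[0].message.content
print(draft)  # a human editor must still check the draft for hallucinations
```

Constraining the prompt to the supplied facts reduces, but does not eliminate, the risk of hallucination, which is why human review remains part of the loop.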
AI use is of course growing all the time, so it is inevitable that staff and students alike will use it more. However, it still feels like an unknown to many people, and regulation surrounding AI use is still evolving.
Preserving Academic Integrity
Ownership and copyright are a major concern amid this uncertainty. In particular, trust in information may be affected, and there are more opportunities for both intentional and accidental copyright violations, as AI-generated work currently sits in a grey area regarding ownership. In July 2023, OpenAI made GPT-4 generally available, and each new model release makes it clear that these systems will only become more capable. This creates an urgent need for more sophisticated and nuanced AI detection methods to preserve academic integrity.
On 13 July 2023, at a Skills for Success staff event, 'AI and Academic Integrity', the library's Information Literacy Coordinator Jon Phipps demonstrated an AI detection model he has been working on. Rather than taking the one-dimensional view that many mainstream detectors seem to, Jon's model examines any text fed into it from several angles to highlight possible AI use. For example, rather than looking for a single pattern, it analyses lexical density and seemingly unnatural repetitions of words, both of which can be telltale signs of AI-generated text.
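For illustration, a toy version of two such signals – lexical density and repeated-phrase counts – might look like the sketch below; this is our own simplified example, not Jon's actual model:

```python
# A simplified sketch of the kind of signals described: lexical density
# (content words as a share of all words) and repeated-phrase counts.
# This is a toy illustration, not the detection model demonstrated at the event.
import re
from collections import Counter

# A tiny stop-word list stands in for a proper function-word lexicon.
FUNCTION_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is",
                  "it", "that", "this", "with", "for", "as", "on", "be"}

def text_signals(text: str) -> dict:
    words = re.findall(r"[a-z']+", text.lower())
    content = [w for w in words if w not in FUNCTION_WORDS]
    bigrams = Counter(zip(words, words[1:]))   # two-word phrases
    return {
        # share of meaning-carrying words; unusually flat or dense text stands out
        "lexical_density": len(content) / max(len(words), 1),
        # how often the single most common two-word phrase recurs
        "top_bigram_repeats": max(bigrams.values(), default=0),
    }

print(text_signals("The results show that the results show a clear trend."))
```

Each signal is weak on its own; the value of a multi-angle approach like Jon's is that several weak signals together give a fuller picture – a starting point for a conversation, not a verdict.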
However, as AI use and detection are not always black and white, Jon hopes that this sort of model will be used by universities alongside existing tools. Primarily, tutors could use it to flag potential AI use and then open a discussion with the student about academic integrity in a productive, non-accusatory way. The hope, then, is that AI itself can enhance university processes and be put to positive use.
An “Automated” University?
In the same Sherif webinar, Dr. Mark Carrigan of the University of Manchester showcased an experiment he created to explore the future of AI in education, using ChatGPT itself for a response. The bot gave two scenarios for a more "automated" university: "Nova University", which imagines labour enablement, and "Solitude University", which focuses on labour replacement:
- Labour enablement – where machines enhance human productivity by replacing routine aspects of a role, freeing up time for non-routine work (such as creative projects and student support).
- Labour replacement – where machines substitute human labour, leading to the partial or entire elimination of certain jobs.
This experiment raises important questions about the role of AI in shaping the educational landscape and prompts us to consider the implications of labour enablement versus labour replacement.
It is also important to understand student needs and concerns about AI. As AI becomes more widely used, students and staff alike may need to develop stronger information literacy skills, along with an understanding of AI's future potential. Institutions should provide guidance and policies that address ethical considerations, clarify ownership rights, and promote fair use of AI.
As Sue Attewell (Head of edtech at Jisc) explained in the Sherif webinar, student concerns can include:
- Information literacy – particularly navigating trust and accuracy
- Data security – safeguarding privacy and resolving ambiguity over data ownership
- Plagiarism detection – ambiguous guidelines and striking a balance with regulation
- Transparency – confidence in how staff use AI, which links to:
- Equity in access – affordability, and whether institutions subscribe collectively rather than leaving students to pay individually
- Employability – skill development and the impact of automation
- Sustainability – concerns about carbon footprint and human impact, such as the exploitation of workers in AI development
Additionally, in a joint piece for the higher education discussion website Wonkhe, Charlie Jeffery (Vice-Chancellor and President of the University of York) and the University and College Union (UCU) York Executive Committee write:
“We are grappling with a broken funding system which systematically underfunds the true costs of both home undergraduate teaching and research and leaves us reliant on other unstable funding sources.”
This highlights concerns that there is an economic incentive to explore – or exploit – how AI could be used to cut costs, potentially affecting the standard of support students receive.
As such, encouraging open discussion with students is vital, allowing them to help shape the future of their educational experience.
How the Library Can Help
When used ethically, AI has the potential to positively impact students' learning journeys: facilitating more effective searches, providing translation that goes beyond word-for-word conversion, and engaging students in discussion about content.
Furthermore, despite the challenges in the education funding system, libraries can play a vital role in shaping the future of AI and automation. By incorporating AI and data literacy into their services, libraries can encourage responsible use of AI tools, enable discussion of ethical considerations, and promote critical thinking. By fostering collaboration and embracing technology as a tool for learning, they can empower students to navigate the evolving AI landscape in an informed way.
Ultimately, striking a balance between the positives and negatives is crucial to ensuring ethical and responsible use of AI in information services, and this is something the library has a keen interest in. During the recent Digital Skills Focus Week (30 May – 2 June 2023), Jon Phipps held an in-depth session on using AI tools positively and ethically, a workshop created by Academic Support Librarian Oona Ylinen. It included tips on using chatbots such as ChatGPT, and on apps that streamline workflow and enhance research.
The workshop also covered tools such as Obsidian – which has AI plugins available – and Notion – which has AI built in natively. Both are knowledge management platforms that students can use to support their learning, for example by keeping track of their studies and by creating and connecting notes with metatags. Notion in particular can be used as a software sandbox, meaning students can build their own interfaces tailored to their preferred ways of learning. Jon Phipps describes it as an "Imaginarium" – a platform designed to encourage rather than replace productivity.
Tools like these can not only speed up the process of finding sources, but also help with making connections between them. In a recent exchange, Oona Ylinen wrote that these tools can "assist with evaluation and synthesis processes, and present the information students find in alternative formats, like turning text into visual or audio formats". This is particularly useful because different students learn in different ways, or have specific learning needs that can be catered for in various ways. Oona continues that AI tools can also be useful for "fostering idea generation and creating more structured research environments", as they streamline the manual parts of research while ensuring that students are still learning – "in order to operate these tools, they must still be aware of the processes behind the tasks that these tools are assisting with".
Workshops like these are just one example of how integrating AI into education and information services offers tremendous potential: enhancing learning experiences, expanding access to information, and streamlining administrative tasks. Taking concerns about bias, trust, ownership, and regulation into account, we are hopeful that continuing such sessions will encourage open discussion between students and staff, prompting institutions to devote more time to developing ethical guidelines and shaping a future that prioritises student needs and ensures equal access and opportunities for all.
More information: