AI@UNE: Part 2

Published 28 June 2023

Check out AI@UNE Part 1 | Part 3 | Part 4

Workplaces and learning institutions like universities now find themselves having to adapt – rapidly – to a world containing generative AI. While the technology will undoubtedly spare us mere mortals some mundane work tasks, it prompts countless ethical and legal questions.

UNE is monitoring developments closely, conscious that generative AI will be integral to workforces in the very near future. Head of UNE’s Law School, Professor Michael Adams, notes: “As the capabilities of artificial intelligence grow, leaders need to be purposeful and thoughtful about how they deploy it, understanding that AI is at its best when it’s teamed with human intelligence.”

In the second instalment of our AI series, UNE staff explain where we stand.

NB: Since this series was launched, Aaron Driver has been appointed as AI Integration Lead, Faculty of Science, Agriculture, Business and Law.

Professor Michael Adams.

What is UNE’s position?

UNE encourages Unit Coordinators to take a balanced approach to generative AI tools in learning and teaching, considering discipline-appropriate applications, educating students on appropriate use, and assessment design that maintains academic integrity.

UNE’s policy position on academic integrity remains unchanged – under existing policy, academic misconduct includes ‘presenting under the student’s own name, work substantially written by someone else’, and this includes the use of generative AI tools.

How is generative AI likely to impact higher education?

UNE leaders in this space caution that generative AI raises questions beyond academic integrity. Across higher education, the new tools are likely to impact course design and delivery in profound ways. Management practices, processes and administrative systems will also change, both to take advantage of the technology and to guard against its misuse.

Aaron Driver.

At UNE, discussions are continuing about the uses and shortcomings of generative AI. This includes consideration of the ethical dilemmas its use can pose, the bias inherent in the Large Language Models it relies on, and associated intellectual property and copyright issues.

“Students are already using this technology,” says Aaron Driver, Lecturer and Academic Integrity Officer with the UNE Business School, “so they need to be aware of how to use AI responsibly and, if not, of the penalties that will ensue. Lecturers within UNE’s Business School are certainly testing against AI, to ensure that assessments and AI detection tools are robust. However, while concerns about academic integrity are front-of-mind, AI will have much broader impacts on the way we work. Assessment design, for example, is already changing.

“In time, AI will enhance the quality of our teaching. Generative AI will allow us to empower our students with tasks and tools that would have been out of reach, due to cost and complexity, just a year or two ago.”

Aaron believes that working with generative AI opens up possibilities to make our lives “more productive and to improve the quality of our work”. “Obviously, there are many downside risks, which is why we need to customise and humanise the output,” he says.

Dr Wellett Potter.

UNE’s Learning Media Team has already found applications for AI in 2D and 3D graphics texturing, video captioning, stock image generation, and computer programming, where it is improving the speed, quality and efficiency of development work.

“I can see a time when we incorporate chatbots into our myLearn sites and emails, to help automate responses to the myriad administrative questions we get,” says Dr Wellett Potter, from UNE’s Law School. “As lecturers, we have to start incorporating these tools into our assessments. We might, for instance, get the AI to write something and then have students critique it.”

How do we navigate the inappropriate use of generative AI?

“We need to have some very serious discussions around how we are going to regulate AI’s use,” says Wellett. “The use of generative AI to create assessment content remains prohibited, unless requested or permitted by a Unit Coordinator. Students are required to write their own original content, but it’s complicated. If there is a strong likelihood that the student has used AI, how do we determine authorship or the originality of that output?”