This webinar explores the multifaceted role of Artificial Intelligence (AI) in higher education, examining its potential benefits, inherent risks, and the current landscape of its adoption, and asking whether AI is a hopeful advancement, a harmful threat, or merely hype.
Hello and welcome everyone to this AMBA and BGA webinar. I'm joined here by Waterstons, and this session will be on artificial intelligence in higher education: hopeful, harmful or just a hype? I'm looking forward to an hour of really interesting insight, and shortly I'll be handing over to our speaker, Stan Neil. Before I do, I want to remind everyone that we want to keep these webinar sessions as interactive as possible. There will be some Slido questions you can answer by scanning the QR code, and if you have any further questions you can always pop them into the Zoom chat and Stan will be happy to answer them at the end. If you miss anything, don't worry, because we are recording this and the slides will also be available in the post-event comms and on the AMBA website. Now, without further ado, I'm going to hand it over to Stan.
>> Brilliant. Thank you very much. I'm assuming the screen share is still working okay for everyone and we can all see that. Fantastic. Thank you very much. As mentioned, we try to keep this very interactive. I've got a bit of an agenda in terms of what we're going to talk through, but I'll also give a bit of context on who we are. My name is Stan Neil. I'm sector principal for education at Waterstons, and before I get into the detail of who Waterstons are, my own background is probably pretty relevant here. I joined the business about three years ago, and prior to that I had worked at a number of different universities. I'd spent over a decade working in higher education, at different types of institutions, before I joined Waterstons. So I am very much non-technical in terms of background; my background is in the sector, understanding higher education and the challenges therein. I should also say that I'm joined on the call today by my colleagues Joan and Jen from Waterstons. So it's great to meet everybody and to be welcomed here. Hopefully this clicks through.
So who are Waterstons? We are a technology and business consultancy. We've got three offices in the UK and an office in Australia, and we provide services in a range of different areas. Some of our service areas are on screen here, ranging from cyber security, mergers and acquisitions and managed services through to data and AI and digital solutions and development. As I mentioned, we've got three offices in the UK and one in Australia, but we've got a real specialism in higher education. In terms of the size of the organization, we're just under 300 consultants, and across the portfolio of clients, when I checked today, we've got 62 live projects in education with 23 different institutions and organizations in the sector. So that sector experience is really relevant to what we do. One of the taglines I've got there is that we're big enough to deliver as an organization, a reasonably sized technology consultancy, but also small enough to care and small enough to understand the challenges that are unique to education. I've also got some of our different education clients on the screen here.
You'll probably see there's quite a range of different types of institution. I know not everyone on this call will be as familiar with the UK higher education space, as people are joining from different contexts, so just to give a bit of context: we've got institutions such as Manchester, Durham and St Andrews, which are the more elite Russell Group institutions of varying sizes; then some of the newer institutions such as Teesside University or the University of Sunderland; and we've also got alternative providers. NCF, for example, has an entirely different model focused on further education, the lower end of the education system. So we've got a real range of clients in education and, as I said, a range of different technical service areas. Over the last couple of years the big topic has been artificial intelligence, and it's something that increasingly cuts across all of our service areas: there's a cyber angle to this, there's a data angle, there's a development angle. It also touches all of our clients, whether that's an institution with a large number of students, like the University of Manchester with 50 to 60,000 students, or some of the smaller institutions, some dealing with 16 to 18 year olds and some with the more conventional degree ages of 18 to 21. They're all grappling with the challenges of AI.

So I'm going to talk a little bit first about the context of AI and education, then some of the use cases we're seeing in the sector and those we've been involved in developing, some of the risks, challenges and limitations that we face when we're dealing with AI, and some effective strategies for implementing AI solutions. And I will try to answer the original question that we posed around whether this is harmful, hopeful or hype. So that's the rough agenda.
As we said at the outset, I want to keep this quite interactive. So I will talk for a bit, but there's a slide where it would be great if people could participate, a few points where we can have some interaction, and hopefully some questions and discussion at the end. I'm really looking forward to hearing people's opinions, because the other thing I'd say, having referenced at the start the different teams that we have and my own background in education, is that one of the things I specialize in is digital strategy work: working with education clients to develop a digital strategy for their institution. That often involves going out and speaking to staff, students and different stakeholders who sit
outside the technology space. In doing so, I would say AI is definitely the most divisive topic that we go out and speak to people about. In the context of a digital strategy, if you're talking about something like Wi-Fi and network connectivity, everyone agrees they would like the Wi-Fi to be good. When we're talking about AI, some people use it all the time: they're really familiar with it, really confident, and annoyed that they can't use it more. Some people are really worried about it and really hate it. So you get a real breadth of views on AI.

Maybe the quality isn't great on these memes, but generally, when we think of AI in education, one of the challenges is that the immediate recourse in lots of conversations is to think about plagiarism and academic integrity. We've got a couple here relating to people using ChatGPT to make up scholarly articles that they can reference, or using it to plagiarize and then failing. That is typically where a lot of people's heads go immediately: when they think of AI in education, they think about academic integrity. Will students be cheating on assignments, and how can we guard against that? To some extent, we're starting from quite a risk-averse basis. A lot of the institutions we work with may already have policies around AI, but those have been developed very much by people in the teaching and learning domain, the academic staff; they haven't necessarily involved the wider organization or the technical teams who might be looking after systems with AI capabilities. So we're often starting from quite a worried, risk-averse space, focused particularly on student usage, plagiarism and those kinds of issues.

So, thinking in a bit of a broader context, what are some of the use cases and other areas that we might want to think about? I've broken these into three categories. Broadly, it falls into use cases that relate to students, use cases that relate to teaching staff, and use cases that relate to the operational functions of different institutions.
So, starting with students on the left: I mentioned the worries about plagiarism. One of the things we have seen is people starting to figure out how you could have approved tools that students can use to assist with, for example, research: finding the articles they need to reference, and helping them with feedback, often in quite a general sense in terms of writing advice, that kind of thing. Can it make their experience better? Can it speed up the enrollment process? I've got a use case linked to that in this slide deck. Also, from a student perspective, these students are going to be entering a workplace where there's an assumption that they'll understand a little bit about AI and how to use it, as it becomes an increasing part of everyone's working lives, so we should be setting them up to actually be able to use AI tools.

In terms of teaching staff, we're thinking about things like feedback and marking. How can AI enable that, speed the process up, and allow staff to spend more quality time with students? Can it help with lesson planning and course planning? Can it help with target setting and differentiation? And can the data analysis capabilities of AI be an aid to teaching staff as well? That's some of the conversations we're seeing there.

But we're also seeing, operationally, more of those back-end functions: how can AI support those? Can it be used as a chatbot for marketing and recruitment? Can it help to attract students to an institution? Can it be used to speed up enrollment processes that might be slow, where institutions are losing learners because it's taking too long and they're going somewhere else? What about things like room bookings or resource optimization? And there's a use case I've encountered around policy documents: an institution has loads and loads of policy documents that all cross-reference each other. Can you use AI to make sure that if a policy changes in one domain, and that has a knock-on effect elsewhere, it's tracking that and keeping everything aligned? So lots of different ideas and use cases there.

This is where I would encourage people to use the Slido. The question is: what are the AI use cases you think have potential to improve student experience, save staff time or deliver operational efficiency in education? I'll leave that on the screen for a moment so people can follow the QR code and leave some comments. While that's up, I'm going to talk in detail about two of the actual use cases that we've implemented. Whilst I'm doing that, hopefully you're able to throw some of your ideas into the Slido, and then I'll revisit it after I've spoken through the use cases. Hopefully that makes sense: I've thrown some ideas out, I'd be keen to hear what ideas people have and what people are interested in, so throw those on there. As I talk through some of the examples, we can return to them and see what's coming from the wider community. I'm particularly interested in the perspective from yourselves as business schools, thinking about your students in particular, who I guess would be expected, on entering the world of work, to be engaging with these technologies and to have an understanding of them as well.
I can see that there are some things coming through on the Slido. You can also, I think, upvote other people's suggestions through Slido, so feel free to do that and we'll revisit it in a moment. I'm just going to talk through two use cases that we've actually implemented as an organization. The reason I've picked these two is that they're at quite different ends of the spectrum in terms of how easy or how complicated they are to implement. They highlight some of the benefits that can be delivered, and hopefully they're an aide-mémoire for us when we then think about what the risks are and how we prioritize the deployment of artificial intelligence.
So the first example we have is around document checks. The problem statement with this client was that their students were required to upload evidence of identity as part of an application process. This wasn't actually a university example; it's a further education example, so these are learners doing apprenticeships and NVQs in the UK. As part of that application process, they had to upload a passport or alternative ID, something like a UK driver's license.

What a large number of their students were doing on application was uploading any picture they had on their desktop or on their phone just to get through the process. They couldn't be bothered to go and get a picture of their passport or driver's license; they just uploaded a picture and got through the application process. The institution then receives the application and thinks, "Yeah, this would be a great student," but they haven't uploaded the requisite documentation. Staff then have to contact the students and say, "Can you upload a driver's license? Can you upload a passport?" A lot of people don't reply or ignore them. So it's wasting staff time, and it means that a number of students who would have been given places are not actually onboarding and finishing that application process, because they're not uploading those documents.

The solution was using Azure Document Intelligence, which is a service within Azure, so those on a Microsoft infrastructure can implement it fairly easily; it's already part of that application stack. It basically involved training Document Intelligence to verify that these are the documents we're after: recognizing that this is a driver's license, this is a passport. The other part of that training is being able to grab data from the document to speed up the application process, so the tool says: right, there's the home address, there's the date of birth, there's the name, and we can pre-populate the application form. The training is the main challenge here: can you get the application to quickly discern that this is the right document and pre-populate the form?

In terms of the benefits, the institution isn't having to waste as much staff time going back to students. It might be that the odd one sneaks through that isn't the right document, or that an expired passport is missed and they have to contact the student anyway, but it's certainly saving the institution and its staff time. It's also speeding up the application process for students, because you're saying: give us your driver's license or passport now, that will pre-populate the rest of the form, and you can have a decision within minutes on whether you can get onto a particular course. So there are two benefits to setting that up. The other thing to mention is that this was developed very, very quickly: it was designed in an afternoon, and within a couple of weeks it was live and the institution was using it to help screen applicants coming through for these apprenticeship courses.
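Just to make that flow concrete: the sketch below shows roughly how a check like this can be wired up against Azure Document Intelligence's prebuilt ID document model using the Python SDK. It is a minimal illustration rather than the client's actual implementation; the endpoint, key, accepted document types and field handling are all assumptions for the example.

```python
# A minimal sketch (not the client's code): screen an uploaded file with the
# prebuilt ID document model, then pull fields to pre-populate an application.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

ACCEPTED_TYPES = {"idDocument.passport", "idDocument.driverLicense"}  # assumed policy

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                      # placeholder
)

def check_uploaded_id(file_path: str) -> dict:
    """Return a verdict plus any extracted fields for pre-population."""
    with open(file_path, "rb") as f:
        poller = client.begin_analyze_document("prebuilt-idDocument", document=f)
    result = poller.result()

    if not result.documents:
        return {"accepted": False, "reason": "No ID document detected"}

    doc = result.documents[0]
    if doc.doc_type not in ACCEPTED_TYPES:
        return {"accepted": False, "reason": f"Unsupported document type: {doc.doc_type}"}

    # Pull the fields the application form needs; names follow the prebuilt model's schema.
    wanted = ["FirstName", "LastName", "DateOfBirth", "DateOfExpiration", "DocumentNumber"]
    fields = {
        name: doc.fields[name].value
        for name in wanted
        if name in doc.fields and doc.fields[name].value is not None
    }
    return {"accepted": True, "doc_type": doc.doc_type, "fields": fields}
```

In practice you would also want an expiry check and a confidence threshold, with anything borderline routed to a member of staff, which is the human-in-the-loop point that comes up later in this session.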
Moving to the other end of the spectrum in terms of difficulty, and I guess the size of the challenge, we've got one around learner intervention. Students obviously withdraw from their studies for a number of reasons, and I wouldn't be surprised if there is something on the Slido around this topic of trying to understand when students are likely to drop out. Part of the challenge is that there are a number of different markers of whether a student is going to drop out: it could be that their attendance is poor, that they've not logged into the virtual learning environment for a week, or that their grades are poor or have been getting progressively worse. There are a number of risk factors that could indicate that a student might be struggling in their studies and liable to drop out.

We've just finished a project on this, only a couple of weeks ago, so it will be interesting to see what impact it has over time. Essentially it used AI, in a solution built in Databricks, to look at these different variables and identify the students who are at risk of withdrawal. Rather than notifying a personal tutor, with the time lag of waiting for that person to get in touch, it sends an immediate message on WhatsApp that says, in effect, "Hey, we noticed that you've not logged into the virtual learning environment in a while. Do you need a hand with anything?" and signposts them to further support. So it's essentially using artificial intelligence to speed up that intervention.

Now, this one is much more challenging to build and to do, partly because there are disparate data sets: there's data in different places that is relevant to that student. There's also a lot of sensitivity around some of these interventions and the reasons why students may be struggling, so there are quite strict boundaries to put in place around the artificial intelligence and what decisions it's making. The thing that will be interesting over time is to measure the success of those interventions. As I speak, I know we've built something that does this intervention; it will be interesting to see whether it has an impact on withdrawals over time, and that's something we'll have to measure. And obviously there's a dual benefit: on the one hand, in some cases this could reduce the number of students withdrawing from their studies, which would be really positive for them; but there's also a focus here on institutional finances and the cost to the institution of students dropping out.
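For a flavour of what the Databricks side of something like this can look like, here is a deliberately simple, rule-based sketch in PySpark. It is not the actual pipeline we built: the table names, columns and thresholds are invented for illustration, and a production version would typically use a trained model and much stricter data governance, but it shows the basic shape of joining the signals, flagging risk and handing off the intervention.

```python
# A minimal, hypothetical sketch of the at-risk flag, written for a Databricks
# notebook with PySpark. Table names, column names and thresholds are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks

attendance = spark.table("student_data.attendance")   # student_id, pct_attended
vle_logins = spark.table("student_data.vle_logins")   # student_id, last_login_date
grades     = spark.table("student_data.grades")       # student_id, avg_grade, grade_trend

risk = (
    attendance.join(vle_logins, "student_id").join(grades, "student_id")
    .withColumn("days_since_login", F.datediff(F.current_date(), F.col("last_login_date")))
    .withColumn(
        "at_risk",
        (F.col("pct_attended") < 60)
        | (F.col("days_since_login") > 7)
        | ((F.col("avg_grade") < 40) & (F.col("grade_trend") < 0)),
    )
)

# Hand the flagged students to a separate, human-governed messaging step
# (e.g. the WhatsApp nudge) rather than acting on the raw flag directly.
risk.filter("at_risk").select("student_id", "days_since_login") \
    .write.mode("overwrite").saveAsTable("student_data.at_risk_today")
```

The important design choice is that the flag only feeds a separate, controlled messaging step; the AI never decides on its own what support a student needs.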
So those are a couple of use cases that we've actually developed. Going back to the original premise of this conversation, that is not hype; those are things we have done. But what they highlight is that not all AI use cases are equal. Some of them are low impact, some of them are high impact. Something like the document checker is maybe saving time for a couple of staff who had to do that role before; something like learner intervention has an impact on everyone providing support as personal tutors. One of them is pretty low effort and low cost; one of them is higher effort and higher cost. So we've got to weigh these different things: what's low impact and what's high effort, the feasibility of implementing them, the value we're going to get from them, and the ethics around them. In terms of the document ID check, that's a bit easier, because people consent to handing over their identity anyway when they make an application. In something like the student withdrawal case there's a bit more sensitivity in terms of what information about that student an AI tool might hold, and putting the appropriate checks and balances in place. So those are some of the considerations that we have.

I'm going to look at the Slido at this point and comment on some of the things we've had on there, and I certainly think it would be interesting to delve into some of these in more detail in the questions as well and give people a chance to comment. I'm just going to move over and drag this in, and hopefully people can see it on the screen, given the way I've been sharing my screen. So, looking at some of those that have come through.
Handling routine student queries, deadlines, schedules: yeah, absolutely, via a chatbot. As I go through these I'll think about some of the challenges when we try to plot them on that axis of what's low effort. The challenge there is: what's the source data, in terms of the deadlines and the schedules? One of the issues I've seen with chatbots is that they need a good bank of FAQs and data to refer to, because otherwise they're going to give students the wrong information. Often that prerequisite makes it a higher-effort use case than it perhaps should be.

Lesson and assessment generation: that's absolutely something I've seen talked about. The interesting thing there is that in some ways it can feel a bit like a double standard, where academic staff are saying we should be able to use AI to plan our lessons and plan assessments, but students shouldn't be using it to respond to those. So I think there's an interesting conversation about how that balances against students being enabled and empowered to use AI tools themselves.

AI tutors for Q&A: exactly, and that links to the example I just discussed, in that AI can respond 24/7. If a student is having a challenge or an issue, or they've got a question, it could be 2 a.m. when it occurs to them and they want to ask it, and it's not reasonable for their tutors to be online at 2 a.m. to respond.

Personalized learning paths, and feedback: I think those two are related to each other, but yes, absolutely, that's something we're hearing people talk about. And again, it's about drawing the line. In the case of feedback, interestingly, and this is something I'll talk about in a moment, there's the human-in-the-loop element, which is the phrasing we've been seeing a lot recently: okay, AI can provide that first level of feedback, but ultimately is there a staff member who checks the quality of it, validates it, and says yes, that's the kind of feedback we want to be providing to students?

Using AI to look at survey results and try to identify improvements: yeah, that's a good suggestion. And that one, I think, is fairly low risk, in that the AI could do the analysis; it's not going to be implementing the solution to try to improve education.
Preparing case studies; chatbots come up again; feedback again; enrollment and recruitment; board of examiners. Yeah, that's really interesting. Board of examiners is an interesting example, and it links to one of the things we'll come on to when we're looking at that matrix of high effort, low effort, high impact, low impact. One of the phrases I use quite a lot is: use it on what slows you down, not on what makes you special. In the context of UK higher education, and I'm sure it's the same in other contexts as well, quite a lot of institutions have had to cut back on staff numbers, so when you're talking about AI, people are thinking: are you just going to replace everybody with artificial intelligence? The way I'd phrase it is: could AI actually reduce the time that you're sat in board of examiners meetings for hours on end, and enable you to spend more time with students and doing the stuff that's really, really valuable?

There's definitely stuff here around workflows, approval steps and communications. Creative industries is an interesting one, actually; in strategy work we've done previously, there's an element of your design students, your photography students, and creative fields like music being really directly impacted, and there may be skills that employers will expect from students in those domains in the future, so they need to be across those.

Okay, I think a lot of those are getting at some really similar use cases, which is great to see, and I'll be interested to see what comes through in some of the questions at the end as well. But they're great, so thank you for that, and as I say, we'll hopefully revisit some of those questions at the end.
So the first half of what I've been talking about has been really positive, trying to say: yes, we can use AI, don't worry so much about the plagiarism stuff, we've got some use cases, some more difficult than others, and so on. That said, we can't be entirely cavalier about this and just say, "Yeah, you can all crack on, use AI for your research or use it for your feedback." I've already hinted a little at some of the risks and some of the things we do to mitigate them.

The first risk is that it's always worth remembering there are issues around accuracy of information and bias. We were having an internal meeting yesterday where we were talking about how AI is just another technology, similar to all of these other technologies, but I think there's one caveat to that. If you compare most artificial intelligence tools to something like a piece of code someone has written to build a Power App in Microsoft, you could run that Power App a hundred times and you'll get the same result a hundred times, because you have told it that when X and Y happen, then Z has to happen, and it will do exactly that, X, Y, Z, every time. Artificial intelligence will sometimes not do what you expect, in a different way. So there is definitely something in that around accuracy of information, and that's why the second bullet point is really, really important: having that human in the loop. The human-in-the-loop element is crucial, so that someone actually checks this information.

Just to reference a different use case we're working on at the moment: it's around students who have applied to a university and might be eligible for funding or scholarships based on certain criteria. We can use an AI tool, because obviously lots of applications come in, lots of data, and the AI can check who is and isn't eligible for that funding. That's fine. What we're not doing is saying that the AI then decides. The AI does the first check of "these are the students we think are eligible"; the person who presses the button to say "yes, that person is getting the scholarship" is still a human being. It's really important that we don't just fully automate and say AI can solve that. There's got to be a human in the loop who has that final decision: okay, the AI thinks these five people are eligible; yes, I can check they are eligible; great. And you probably also need an appeals function for people who are determined not to be eligible. So it can massively speed things up and stop someone having to trawl through all of this information, as long as there's a human making that final decision.
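Purely as an illustration of that division of labour (this is not the scholarship system itself, and every name below is invented), the pattern is: the model proposes, a named human decides, and the decision record keeps both.

```python
# Illustrative human-in-the-loop pattern: the AI only proposes candidates;
# nothing is awarded until a named member of staff confirms the decision.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class EligibilityProposal:
    applicant_id: str
    ai_assessment: bool      # what the model thinks
    ai_rationale: str        # why, for the reviewer and for any appeal

@dataclass
class FinalDecision:
    applicant_id: str
    awarded: bool
    decided_by: str          # a human reviewer, never "system"
    decided_at: str
    notes: str = ""

def confirm_decision(proposal: EligibilityProposal, reviewer: str,
                     approve: bool, notes: str = "") -> FinalDecision:
    """Record the human decision; the AI proposal is advisory input only."""
    return FinalDecision(
        applicant_id=proposal.applicant_id,
        awarded=approve,
        decided_by=reviewer,
        decided_at=datetime.now(timezone.utc).isoformat(),
        notes=notes or f"AI suggested: {proposal.ai_assessment} ({proposal.ai_rationale})",
    )
```

Keeping the AI's rationale on the record also gives you something concrete to point to in an appeals process.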
I think there are obvious cyber security risks, which I'm sure people will be familiar with, in terms of AI being used as a tool for people to access things they shouldn't be able to access. That's something we see cyber security teams and functions being increasingly mindful of. There are also data security risks, in terms of data being breached, and that's part of why one of the things we're really keen to do is direct people towards the safe tools and the safe use cases. If you say to your students, for example, "AI is never acceptable, never ever use AI," it's going to push them towards unsafe tools. Similarly with staff: if you enable Copilot, for example, as something that staff can use, it's really important to encourage that, because what you don't want is people putting data that shouldn't go out into the public domain into a tool that isn't within your environment. If you've got Copilot enabled, and I don't know how many institutions on here have taken that step of enabling Microsoft Copilot, if that's the approved tool and the data stays within the organization, that's great. What you've really got to make sure people understand is: don't put data that should be staying within the organization into something external like ChatGPT or DeepSeek.

The final point there is around academic integrity and value for money. The reason I've phrased it like that, having said at the start that I'm not going to talk a lot about plagiarism, is that I think there maybe is a risk, for example with chatbots, which came up quite a lot in the Slido: if all institutions around the world implement chatbots and students are using chatbots for that interaction, I wouldn't be surprised if in a few years some institutions start saying, "We're not using any chatbots, you can speak to a human," and that becomes their USP. So I think there's a balance there in terms of providing that student experience and value for money.
Ethics is also a big part of this. So: having a clear AI policy for students (I referenced that most institutions seem to have that); thinking about what tools are approved, and making that list available to people. As I say, I would caution against just banning everything; I think you've got to say these are the approved tools and this is what they're approved for. The third bullet point is the option to acknowledge AI-generated content, and there's space there: people referenced research in the Slido, and students can be encouraged to acknowledge and reference where they have used artificial intelligence. Being really clear on what's prohibited, and on the prohibited AI use cases, is important. And then the final point: who has oversight of this, and who is responsible for ensuring that these policies are adhered to?

The other thing you might have noticed is that I'm quite light here on specific tools, and I can talk in more detail if people really want me to. In a lot of the work we've been doing recently, I'm saying to clients and institutions: I'm not going to come in and say you should use this tool for this or that tool for that, because I don't know what that landscape is going to look like in three, four, five years' time. But what you can say is: this is the governance process for deciding what is acceptable. You might have a page that says these are the approved tools; every three months a governance group gets together, reviews those tools and decides what is or isn't acceptable. So you can set out your governance process for that now, and it can be reactive if new tools come along or certain tools become less popular. That's a key point as well.
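As a small, entirely hypothetical illustration of that governance artefact, the approved-tools page can be kept as structured data so it is easy to publish and easy to re-review on the quarterly cycle; every name, scope and date below is invented.

```python
# A hypothetical approved-tools register; names, scopes and dates are invented.
from datetime import date, timedelta

REVIEW_CYCLE = timedelta(days=90)  # the quarterly governance review mentioned above

APPROVED_TOOLS = [
    {"tool": "Microsoft Copilot", "approved_for": ["staff drafting", "meeting summaries"],
     "not_approved_for": ["marking student work"], "last_reviewed": date(2024, 9, 1)},
    {"tool": "Institution chatbot", "approved_for": ["routine student queries"],
     "not_approved_for": ["advice on extenuating circumstances"], "last_reviewed": date(2024, 9, 1)},
]

def tools_due_for_review(today: date) -> list[str]:
    """Flag anything the governance group should look at again this cycle."""
    return [t["tool"] for t in APPROVED_TOOLS if today - t["last_reviewed"] >= REVIEW_CYCLE]

print(tools_due_for_review(date(2025, 1, 15)))  # both entries are overdue in this example
```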
What are some of the limitations and challenges? In the examples I talked about, a key one is data availability and data maturity. I've already mentioned that one of the challenges around chatbots is their data source. If, for example, you have a chatbot that's meant to deal with student queries about examination arrangements, where is that chatbot looking for the details of those arrangements? If you've got some information in the VLE, some information on your website and some information in a handbook, and they're contradictory, how does the AI know what to look at? So there's an element of data availability and data maturity that needs to be set up correctly before we implement certain AI use cases. In the example of the document checker, when people are applying, that's less of a problem, because they're coming in as new students and the data is entirely new. In the example of the learner analytics, it's more complicated, because you have a student as an individual with all of these data points coming off them: as I said, when did they last log into the VLE, when did they last turn up to class, what have their grades looked like over the previous few months? So you've got more data points that you need to pull together.
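To make that single-source-of-truth point concrete, here is a deliberately simple, library-free sketch (all content invented) of the behaviour you want from a grounded chatbot: answer only from the designated canonical source, and escalate rather than guess when the sources you hold disagree.

```python
# Toy illustration of grounding a chatbot answer in a canonical source.
# A real system would retrieve from indexed documents; the principle is the same:
# one authoritative source per topic, and no confident answer when sources conflict.

SOURCES = {
    "exam_arrangements": {
        "website":  "Exams run 12-23 May in the Great Hall.",
        "handbook": "Exams run 12-23 May in the Great Hall.",
        "vle":      "Exams run 5-16 May in Building C.",   # a stale page
    }
}
CANONICAL = {"exam_arrangements": "website"}  # governance decides which source wins

def answer(topic: str) -> str:
    docs = SOURCES.get(topic, {})
    if not docs:
        return "I don't have information on that; please contact student services."
    if len(set(docs.values())) > 1:
        # Sources disagree: surface the data problem instead of guessing.
        return ("Our published information on this is currently inconsistent; "
                f"please check the {CANONICAL[topic]} or ask a member of staff.")
    return docs[CANONICAL[topic]]

print(answer("exam_arrangements"))
```

A real chatbot would retrieve over indexed documents rather than a hard-coded dictionary, but the data-maturity prerequisite is the same: decide which source is authoritative before you let anything answer from it.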
Deciding where to invest, and getting that investment, is a challenge. Part of the challenge here, and I'll be interested to hear the thoughts of people on the call on this, is that one of the things I hear quite a lot is that there's pressure from senior leadership teams, from vice-chancellors and so on, saying: we want to be using AI, AI is the future, they've heard so much about it, it's in the news, government are talking about it, we need to be at the forefront of using it. Then, when I come along and say you should use it to check documents, that's maybe not quite as sexy or interesting or inspiring as they were hoping. So it's about getting the investment into the right tools. You can spend a lot of money on a flashy AI tool, but a lot of the use cases we've been involved in are a bit more practical in terms of delivering tangible benefit to people.

That links to expectation management for non-technical stakeholders, in terms of what AI is capable of, and also, just because it is capable of things, what parameters we want to put around it. When I gave that example of the tool that looks at whether people are eligible for scholarships, you might think, well, let's just automate the entire process and that will make things easier; but there are reasons to put limitations there, and it's about being able to articulate those to different stakeholders.

The final point I've got on here is culture and digital literacy. As I said, I think there's a huge difference of opinion between your students and your staff on AI and these topics. As you'll notice, I've got my catchphrase at the bottom: use it on what slows you down, not on what makes you special. I think that's a key thing here. When we work with different institutions, they might say, "Do you know what, we're really well known for our specialist medical labs; that's the thing we've built our reputation on." I'd say: let's keep AI away from that for now and keep doing what you're doing there; let's try to understand which bits of your organization aren't working as well as they could, or are working slowly, and use AI to help with those.

So how do we actually go about it? We've got loads of different use cases, loads of different ideas that people have thrown together, and they've been fantastic. The thing we've been supporting different clients with is AI strategy work, and these AI strategies generally cover the following topic areas. I mentioned culture and engagement right at the end of the last slide: being clear with people about what's approved and what isn't, and saying we're going to support you, train you and provide guidance on using the tools rather than hiding away from them. Then there's the governance process: how often these policies are going to be reviewed, and how often we're going to look at which tools are acceptable and which aren't.
Long-term projects: if, for example, you really wanted to do the learner analytics use case and say we want to put something in that will look at learner performance and intervene to support learners, it might be that you've actually got to make sure you can get all of your data into a data warehouse, in the same format and aligned with the student ID. So that might be a longer-term data project, and you can say, well, we're going to do that first and then come to that use case later. I think it's also worthwhile having some of those little agile use cases in play. That can range from things like the document ID check, which is quite easy to put in place, to a pilot of something like Microsoft Copilot: saying, do you know what, we're going to let this team use Copilot to see how it can help and support them in what they're doing, and finding those little use cases.

The other way to think about this, and we've been supporting people with this, is on three different levels. You've got the personal level: you as an individual, and the tools that can help you in your working life. Then the team level: are there team functions where we could use AI? In that document ID checker example, there's an admissions and enrollment team who happened to pick up that work, and the tool is changing how their team works. And then anything that's organization-wide: thinking about the big things you could do that could improve everyone's experience. So there are a few different ways you can split this up.

The other thing to be mindful of here is your maturity, and the different levels of maturity you might have. It might be that you've got some good tools, good infrastructure in place and some integrations that mean you could crack on and start using some of these use cases, but the flip side might be that culturally people are really resistant and nobody has bought into it, or that you haven't got the compliance or governance elements you need in place. So you might be ahead in one area and behind in another, and understanding your maturity and where you are on that journey is really, really important.
So I'm going to stop talking for a moment and see what questions have come in. On the original question, I don't know whether to sit on the fence or not, really. Most clients are in one place or another on this: some are very, very worried and very locked down, going back to the points I was making at the start around plagiarism, and are not using this at all; and some have been very relaxed, where it's a little bit of the wild west and people are using AI for all sorts of things. We're somewhat sitting in the middle of that, saying: you can use AI, and there are safe ways to use it that make people's working lives better and can actually enhance and improve the student experience; however, there are also a lot of controls and things you need to put in place to be able to do that safely. So I think we sit in the middle of that spectrum. There is a lot of hype out there, and I would say there's probably some validity to thinking of it as hype, because fundamentally there are some things about the student experience and the learning experience that wouldn't and shouldn't change. But I do think it's here to stay; there's nothing we're going to be able to do to get rid of it. So I'm somewhere in between harmful and hopeful. I think we can balance those two things: we don't want to be so risk-averse that we don't use it at all and push people towards unsafe tools, but we don't want to just use it for everything either. We need to use it where it's sensible and have appropriate controls in place. That was a very fence-sitting and somewhat prevaricating answer.
The final thing, I guess, is a bit of a call to action. I've got another QR code on screen here: if you want to put your details in, we're going to pick three winners to have a bit of a readiness workshop. Going back to that maturity level, we can benchmark where you are and give you some suggestions, so it's a roughly two-hour workshop and we'll give you some recommendations out of it. If people are interested, please do follow that QR code and provide your email, and we will pick some winners. Don't worry if you're not chosen: we will get rid of your email and so on; we don't want to keep any of that information. But do follow that QR code if you're interested in having a further conversation, where we could talk about your specific institution and try to assess where you are on that readiness to engage with AI. That's me finished with my presentation. I'm going to jump over to questions, if anyone has any. I don't know if they're going to appear for me in the chat.
There is a Q&A. Oh no, there's a question about the sound: "Was the..."

>> Um, yeah, I think that was it. That was the only question. I hope they managed to work out their sound; it was working fine for me, I was able to hear you loud and clear. But thank you once again to our partners, Waterstons: Stan, Joan and Jen, thank you all for joining us and for your interactions. I hope everyone else enjoyed the session as much as I did. And of course, if you missed anything, don't worry, because the recording and the slides will be available in the post-event comms and on the AMBA website by sometime next week. So thank you, everyone. Thank you for joining us, and take care. Have a lovely afternoon. Bye bye. Thank you.