Oct 3, 2024
Artificial Intelligence (AI) is rapidly reshaping many sectors, and academia is no exception. In a recent episode of the SA Voices From the Field podcast, hosted by Dr. Jill Creighton, guest Dr. Daniel Weissglass shared his insights on the role, challenges, and potential of AI in higher education. Dr. Weissglass, an assistant professor of philosophy at Duke Kunshan University, delves into academic integrity, student affairs, and the future landscape of education, with a particular focus on AI tools.
At the heart of the discussion is the need to rethink traditional academic assessments in light of AI advancements. Dr. Weissglass emphasizes the importance of critically evaluating the types of assignments given to students. He suggests that faculty members collaborate closely with academic integrity units to adapt their methodologies in response to the changing academic environment.
AI, particularly generative models like GPT (Generative Pretrained Transformer), can produce seemingly original essays and content. This poses a significant challenge to traditional assessment techniques, which often rely on evaluating written assignments. Dr. Weissglass advocates for adapting in-person assessments to maintain academic integrity. Such measures echo the early days of Google search, when educators needed to adapt to a new tool that changed how students accessed information.
Maintaining the historically valuable elements of student affairs is another critical point discussed by Dr. Weissglass. He underscores the importance of deep, meaningful connections and personal development in education. The role of mentoring and teaching in shaping students' experiences and growth remains as crucial as ever, despite the growing presence of AI in academia.
Dr. Weissglass suggests that while AI can support student affairs professionals by recognizing emotional patterns and raising alerts, it should not replace human interactions. The human aspect of teaching and mentoring is irreplaceable, and AI should serve as a supplementary tool rather than a substitute.
Student affairs has seen increased awareness of, and attention to, maintaining campus cultures. Dr. Weissglass highlights the new challenges posed by AI-enabled academic and student conduct violations. With the advent of sophisticated AI tools, distinguishing between AI-generated and human-generated content becomes increasingly difficult.
To combat these challenges, Dr. Weissglass advocates for developing robust administrative standards for safety and security. He also highlights the necessity of continual responsiveness and adaptation to student needs. As student affairs professionals, it is essential to stay ahead of technological trends and ensure that the academic and personal growth of students is not compromised.
Looking ahead, Dr. Weissglass envisions a future where student growth remains the primary focus, without leaning too heavily on a customer service-oriented approach. He emphasizes that flexibility, continual responsiveness, and reflective responses are key to effectively preparing students for a rapidly changing world.
Incorporating AI into education requires a thoughtful approach to designing prompts and assignments. The goal is to make use of AI tools, like GPT, to support the development of skills that are labor-intensive to teach, such as ethical analysis. Educators need to balance leveraging AI to aid the learning process while maintaining the integrity and authenticity of student work.
Dr. Weissglass discusses various AI tools and their applications in higher education:
Predictive AI: This AI type forecasts trends and flags at-risk students based on data patterns, such as class attendance. It helps institutions take proactive measures in student support.
Generative AI: While capable of generating new content, generative AI raises concerns about academic integrity. This type of AI can fabricate information and compromise data privacy.
Gamma Tool and Copilot: Gamma converts Word documents into detailed PowerPoint presentations, aiding in educational settings. Copilot, part of the Office 365 suite, helps summarize emails and meetings, streamlining administrative tasks.
Scite (scite.ai): This tool assists in generating literature reviews and finding specific articles within academic research, helping ensure the accuracy and authenticity of cited sources.
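The predictive use case described above, flagging at-risk students from attendance patterns, can be sketched in a few lines of code. This is a hypothetical illustration with invented data, names, and thresholds, not any institution's actual early-alert system:

```python
# Hypothetical sketch: flag "at-risk" students from attendance records.
# Data, names, and the threshold are invented for illustration only.

def flag_at_risk(attendance, max_consecutive_absences=3):
    """Return student IDs who missed `max_consecutive_absences` or more
    classes in a row, combined across their courses."""
    flagged = []
    for student, record in attendance.items():
        # record is a chronological list of True (present) / False (absent)
        streak = 0
        longest = 0
        for present in record:
            streak = 0 if present else streak + 1
            longest = max(longest, streak)
        if longest >= max_consecutive_absences:
            flagged.append(student)
    return flagged

attendance = {
    "student_a": [True, True, False, True, True],
    "student_b": [True, False, False, False, True],  # 3 absences in a row
}
print(flag_at_risk(attendance))  # -> ['student_b']
```

A real system would learn such thresholds from data rather than hard-coding them, but the shape is the same: observed patterns in, a risk classification out.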
Dr. Weissglass also stresses the importance of ensuring data security agreements with AI tool providers or developing in-house models to safeguard student data.
The insights shared by Dr. Daniel Weissglass underline the transformative potential of AI in higher education, along with its challenges. The integration of AI tools, such as GPT, must be approached with a balance of innovation and ethical considerations. By rethinking academic assessments and maintaining the human elements of teaching and mentoring, educators can harness AI's potential to enhance the educational experience while preserving the integrity and personal growth of students. As we navigate this evolving landscape, the role of AI in academia will continue to be a dynamic and critical area of exploration.
TRANSCRIPT (Unedited transcript created by Castmagic)
Dr. Jill Creighton [00:00:00]:
Welcome to Student Affairs Voices from the Field, the podcast where
we share your student affairs stories from fresh perspectives to
seasoned experts. Brought to you by NASPA, we curate free and
accessible professional development for higher ed pros wherever you
happen to be. This is season 11, the past, present, and future of
student affairs, and I'm doctor Jill Creighton, she, her, hers,
your SA Voices from the Field host. Welcome back to a new episode of SA Voices from the Field, where today we will feature Dr. Daniel Weissglass. Dr. Weissglass is an assistant professor of philosophy at Duke Kunshan University, which is a Sino-US liberal arts institution located near Shanghai, China. His work
focuses on the ethics of science, health, and technology with a
special interest in the use of artificial intelligence to meet
health needs in low and middle income countries.
Dr. Jill Creighton [00:00:49]:
He also works in various ways to help DKU make the most of AI as an
educational tool, as well as assisting in the development of
policies regarding their safe, effective, and ethical use. So
today's episode is gonna be focused on the use of AI in higher
education. Daniel, welcome to Student Affairs Voices from the Field.
Dr. Daniel Weissglass [00:01:06]:
Thanks so much, Jill. Glad to be here.
Dr. Jill Creighton [00:01:08]:
It's great to see you. Daniel and I have known each other for a
couple years now, and we're coming to you from across a massive time zone difference. I'm sitting here in the UK and Daniel's sitting
over there in China. So I can see the sun setting on his end and
the sun rising on mine.
Dr. Daniel Weissglass [00:01:22]:
There's maybe a metaphor in that, appropriate to the topic of today's meeting.
Dr. Jill Creighton [00:01:26]:
Oh, I'm excited to get into that. Daniel, you make your livelihood
as a philosopher. And so I always like to ask our guests kind of
how you got to your current seat, but we're speaking with you
mainly today because of your burgeoning career in academia and
artificial intelligence AI. So, yeah, tell us about how you got
here.
Dr. Daniel Weissglass [00:01:46]:
Well, interestingly, I can kind of weave those 2 together. Actually, part of what brought me into philosophy was an interest in AI. Now this was back before the big data science boom even. So what AI meant at the time was a very different concept, one that had more to do with replicating human capacities and building something human-like, or assisting humans in those performances, and less to do with something like the large-scale statistics that we see today. And the question I kept finding myself asking was, well, if we're gonna talk about replicating something like human intelligence, I need to know better what that is. So I went into studying the philosophy of mind, and I also double majored in psychology. And I think the interest I have had in AI throughout my career is part of what brought me to where I am at DKU. DKU, Duke Kunshan University, where I work, is a very interdisciplinary institute.
Dr. Daniel Weissglass [00:02:34]:
It doesn't really follow traditional divisional or disciplinary divisions. We don't have departments. Right? We have these big houses. I say this for the audience; Jill, you know all of this. And when I presented my initial research presentation, it was actually about not artificial intelligence, but artificial emotion, and the possibilities that might bring for things like moral control of AI. So I think this has been sort of a natural path for me. And then with the recent explosion in AI interest post large language models, the place for someone who can think critically, and with some sort of baseline informedness about AI technologies, about the values that we have in using those technologies, has become more and more central to the mission of academic institutions worldwide. And I was very fortunate to find a community here that supports me pursuing that path.
Dr. Jill Creighton [00:03:26]:
You mentioned something that I've not heard in these conversations a lot, which is not artificial intelligence, but artificial emotion, or the mimicking of human emotion in AI. And typically, we're talking about AI ultimately being basically stupid, because it's only as good as what we input into it. Talk to us a little bit more about that emotion component.
Dr. Daniel Weissglass [00:03:47]:
Oh, yeah. Absolutely. So there's a small literature on what's
called synthetic emotion usually. And there are a couple of ways of
understanding what that means. One is being able to respond to
emotional cues of users appropriately. You can almost see this in ChatGPT when it says, oh, I'm sorry, I made a mistake. Right? And that's important for a lot of reasons.
Dr. Daniel Weissglass [00:04:07]:
One being that people are more likely to maintain systems that
apologize when they fail. There was an interesting set of studies on that. But what I was more interested in is systems that try to replicate what you might call the input-output mapping, the sort of function in the mathematical sense, that human emotions have. So
ideally, a system that is capable in this way would be able to, for
instance, look at an image and identify the emotion that that image
would produce in most viewers. So if it showed an image of a person suffering, right, it would identify that this would produce sorrow or sympathy. And this is really important, this kind
of input output mapping, to producing morally correct responses in
some cases. Emotions play a huge role in how we make moral choices
and how we decide to respond to morally loaded events. And so
there's a hope that we can make artificial moral agents, is the
term that gets thrown around in the literature, that would
be able to adequately replicate these components of moral
reasoning, which I think must include something like emotion, so
that they can regulate themselves more effectively.
Dr. Daniel Weissglass [00:05:10]:
Now, presumably, we wouldn't want to take people out of the loop
entirely. But if you don't ask them to regulate themselves based
upon these basic presuppositions that we have captured in our
emotional systems, you get behavior that can be very dangerous and
and very much outside of what we call alignment. You get systems
that are willing to lie, hate, steal, harass people, all of these
things. And so the hope was and and what I was working on at the
time, and it still is sort of on a back burner, is that synthetic emotion might be able to improve this, provide some sort of safety by allowing AI tools to analyze morally loaded instances in a way that's more similar to the way that humans do. There are a lot of challenges to that. But in the context of
something like an academic environment, this might involve
something that's emotionally sensitive and responsive to student
users, for instance, right? So imagine as we've been kicking around
here the idea of an advisor bot. So you've got a first run chatbot
that interacts with students. You don't just want the chatbot to be
able to recognize the question and its meaning in a literal
sense.
Dr. Daniel Weissglass [00:06:11]:
You might also want it to note certain emotional patterns that could emerge in the way students are responding. Right? You might want it to note that the questions this person is asking, and the way they're asking them as the semester goes on, really seem to indicate that the student's not doing so well. Right? And maybe it could raise a flag there. Now this would be, you know, a much more complex system most likely than what we're dealing with in the near future, but that was the idea. And I think synthetic emotion is an underexplored space in education, in the same way that emotion in general, in pedagogical and advising contexts, is underplayed. Right? We focus so much on cognitive expertise. We sometimes forget that this is, broadly speaking, a care profession, and we underplay the importance of that sort of emotional intelligence and emotional engagement, I think.
Dr. Jill Creighton [00:07:00]:
For student affairs professionals, that's where we spend most of our time: working with students in that high-EQ space, in that high-empathy space. And the thought of having an AI bot to help us support that work is a really fascinating one. On my end, I'm currently teaching the technology module for the master's in student affairs through NASPA and LUMSU University. And I just had my first lecture about a week and a half ago, and it was all about introducing student affairs professionals to using AI tools. We're not talking about the technological side of machine learning and how we're feeding large language models and things like that. But really, what can these things do for us to help support our work? Because at the end of the day, when we're working with students, it's a human-centered profession. And I don't believe there's any sort of technological replacement that can get us to a place where we don't need human interaction; that's the core of what we do in a university setting. I think that there's cognitive development that can happen through these bots or even quick answers.
Dr. Jill Creighton [00:07:55]:
But when you're having a really hard day, talking to a bot is
probably not going to help you find a space of resilience or
thriving. But we also have, I think, jumped ahead quite a lot.
We're already speaking from at least a novice perspective on AI. So I want us to back up a little bit to just give some primer and
basics for someone maybe who has heard of AI but has never tried
using a large language model. Maybe they're using predictive AI and
they don't know it. Maybe they're a little fearful of the tools
that are out there because they don't know much about them. So
let's start with the super basic, which is can you describe for us
the difference between predictive versus generative AI?
Dr. Daniel Weissglass [00:08:34]:
So, I mean, to some extent, the answer is in the name. So a predictive AI is focused on predicting. This might be making a sort of, like, quantitative prediction, right, where it says, you know, given recent trends in the financial market, this is where we expect these things to go. Right? You might see these even with very, very simple forms; like, expected grades could arguably be something of this kind. They'd be very simple. So predictive AIs attempt to predict. They might also try to predict things like categories. You might look at patterns of enrollment or something and say, well, this student is likely to be a major in philosophy.
Dr. Daniel Weissglass [00:09:05]:
This student is likely to be a data science major. And that could help you maybe plan staffing right down the road. They could also maybe identify students, if you had the adequate data collection system: we've noticed this student has just missed 3 classes in a row across 2 different courses. Let's raise a flag. Something might be worth noting here. This is now a high-risk student, maybe, the classification would be. Generative AI works in a very different way.
Dr. Daniel Weissglass [00:09:28]:
What it does is generate something. It produces, and there's some debate about how novel the outputs of these things are, but a novel output, based usually upon a description of the desired output. So you go in and you say, draw me a picture of a bird, and it draws you a picture of a bird. You go in and you say, and this is the kind of thing that tends to worry academics, write me a 10 page paper about the role of Rene Descartes' mind-body dualism in creating a sort of individualistic conception of the self which results in all these problematic ways of viewing one's connection to society. I'm getting down a path here. Sorry. I'll back up a little bit. But, you know, you ask it your essay prompt, it will write an essay.
Dr. Daniel Weissglass [00:10:09]:
Not always a good essay, but they're getting better. And so what generative AI does that predictive AI doesn't do is produce novel outputs, novel at least in the sense that they're not just copy and pasting from somewhere. Which raises concerns for academics: things like Turnitin won't work, at least not as well.
Dr. Jill Creighton [00:10:25]:
That generative AI space, I think, is the scariest space for academia, particularly on the academic integrity front, as you just mentioned. But I think it also requires that we reevaluate how we're assessing student learning. We've been relying on the essay for hundreds of years in terms of the way that we measure if a student's critical thinking skills have evolved in the course the way that we want them to. But if I put this prompt into AI about my image of self through the lens of Descartes, I could also ask it to do it in iambic pentameter, and it's gonna spit out something, but it may also invent sources. It may also just make stuff up that is not relevant. It could insert a number of different factors that maybe the end user doesn't know that it's inserting. But it's also going to take what I input into that model and use it to continually train. So none of my data is private when I go into these models, because it is collecting it and then using that to continue to synthesize on its own.
Dr. Jill Creighton [00:11:25]:
But I think what the most interesting piece is to me is that ultimately what we're looking at is math. We're looking at how these machine learning components are taking language, which is ultimately just a variable for it, and then creating stories, full
stories. So when you think about where we are right now in this
moment in higher education, how do you believe that professors
should be looking at these language tools, large language model
tools in their work, in their assessment of student learning?
Dr. Daniel Weissglass [00:11:53]:
Excellent question. I think this is the one that most faculty are
really struggling with. And I think there are a couple of things to
say. One is these tools are widely available and often without charge, which allows effectively every student to be doing what our wealthy students, or our less scrupulous wealthy students anyway, might have been doing in the past, which is hire someone to write an essay for them. Everybody gets to do that. Now that was always a problem. It always existed. But there was enough of a barrier that we kind of just let it slide, I think.
Dr. Daniel Weissglass [00:12:24]:
At least many of us did. Now we must respond. And the way we respond is gonna depend upon what your priorities are. Right? If you want to know that somebody knows something off the top of their head, you should be asking it in a classroom in front of you, maybe with a proctored test in some cases, especially if you're dealing with the increasing number of students who are doing things like remote learning in some way. So there's this sort of, you know, if you wanna stick to the old style, and there are places where that's the right thing to do, you need to be doing it in person. But we also need to be thinking more broadly about what the world we're preparing our students to engage in is going to be like. These will be tools that they will have and be interacting with for the rest of their lives. All of us will be, whether we like it or not.
Dr. Daniel Weissglass [00:13:05]:
And so we need to think about the ways that within our discipline,
we can utilize these tools both to leverage learning about our
disciplinary skill sets and our disciplinary topics, and also that
we can train students to use the tools. Right? So there's using the
tool to teach, for instance, philosophy, and then there's also
teaching students how to use the tool. We're kind of in the early days of Google search again, where every class suddenly had to have a discussion about how to use Boolean search operators and that kind of thing. And how to know that sometimes some of the stuff you're seeing out there isn't really legitimate: you should know that people make stuff up and lie online, and here's how we identify good sources. And now we have to do that with generative AI systems. Right? You should know they hallucinate, is the term that gets thrown around. Right? They make up facts. You need to learn how to prompt them in ways that help you avoid that.
Dr. Daniel Weissglass [00:13:55]:
You need to learn which systems can be trusted for which kinds of things, and generally best practices. I'm most excited about using them as tools to teach skills that are often labor-intensive. So, again, as a philosopher, and particularly in teaching ethics courses. Right? So there's a lot of skills that are important there about analyzing the ethical dimensions of a given case, about working through problems and reasoning effectively, and monitoring students while they do that is a wonderful thing to do and is possible in sort of a live-action way. But providing a ChatGPT, a custom GPT that's been written to prompt my students to go through a certain set of steps, right, can provide them with maybe not quite the same quality, but a much more available version of this sort of prompt. Now I would never suggest that you could replace your assessment that way, or replace your direct education that way. Right? There's still a place for sitting in the room with me and working through it, because I might notice problems that ChatGPT doesn't. But especially over time and with practice, as we learn how to use these tools ourselves, we can build these really cool interactive systems, sometimes called intelligent tutoring systems in the older literature, that help respond to our students where they are, guide them through complicated processes, and really have a lot of promise.
Dr. Jill Creighton [00:15:13]:
You said GPT a couple of times, so I just wanna clarify the definition. So when we say ChatGPT, that GPT stands for generative pretrained transformer. I think a lot of people don't know that it's an acronym, or just haven't gone to the depths of understanding what that means. And so the generative pretrained transformer means that it's taking the information that has already been fed in, that pretraining component, and then transforming it into the output that we see as human beings. But we have different versions of ChatGPT that have evolved over time. 3.5 was the one that a lot of people were using for a very long time. Now 4o ("Omni") is out, which is a paid service. And so 4o is better for sure in terms of the input it's been given and the output that it will give you.
Dr. Jill Creighton [00:15:55]:
And when we look at what students are doing, it's not unaffordable to become a paid member for 4o. And so you can use that to your advantage. The models aren't necessarily at a place right now where they can continuously self-learn in the same way that we might expect them to, like a human brain can. But the information it's getting fed is much more interesting these days. I was teaching the use of GPT 3.5 in my course the other day. And one of the things that I love about the course that I'm currently teaching is that students come from a multitude of countries. I think we had at least 7 countries represented in that space. And so we also learned a lot about bias in the prompts that we were using, and who trains the models, whose values are inputted into the models, what assumptions are made.
Dr. Jill Creighton [00:16:41]:
One of the examples we looked at is how to respond to a highly critical email. And so what we had folks do is input the email into ChatGPT, and then on the other side, ask it to craft a polite and salient response that covers these three points. We made sure to de-identify any names. If your institution has a confidentiality clause of some kind, or if you're trying to observe FERPA, you need to be really careful that you're not putting student-identifiable information into these models, because that data can be used. But what we got spit out was an extremely Americanized version of what that email can look like. And so it, again, raises the question, who is it for, and whose biases are integrated into the system? And so the student that was representing work in Ireland said, I can't use this because it's too American and it doesn't meet my cultural needs. So we asked ChatGPT to transform that response to make it more culturally Irish, and I was real scared of that prompt. I'm not gonna lie.
Dr. Jill Creighton [00:17:38]:
I thought we were gonna border into some very racist territory, and
we were breathing a bit of a sigh of relief when it transformed it
into something that the student identified as a little more usable.
We tried the same thing in Lithuanian, because we had a student representing Lithuania, and it did not give us what we needed. So the limitations of these models are very real, and that happens for student learning as well. And I think this is also true for things like Copilot. So I think the 2 most ubiquitous tools right now are ChatGPT and Copilot for the everyday user. The other one that I've recently really taken to is Gamma. Have you played around with Gamma much?
Dr. Daniel Weissglass [00:18:14]:
No. I don't think I have.
Dr. Jill Creighton [00:18:15]:
Gamma is great. I actually designed my lecture using Gamma. It is a tool where you can take a Word document of, like, just an outline, and upload it, and it will generate a PowerPoint for you based on what you've put into the Word document. If you want it to, it will also generate a ton of detail. Innocuous or not, it will also generate images that sometimes are really funky, but we can get into images in a second. But here's what it did: I asked it to make a sandwich as an example of how to do this. So I put in 10 lines of what I think are the basic instructions on how to make a sandwich. And then you can choose if you want it to give you basic output, kind of middle output, or thorough output, and it will just go to town about sauces and vegetables and slicing and toasting bread and types of cheese and things like that.
Dr. Jill Creighton [00:19:01]:
So I think the sandwich example is an easy one because it can show
you what it will take, which is make a sandwich, take some bread,
add some veggies, add some protein, add some cheese, eat your
sandwich, which is basically what I gave it. It turned it into a 10-slide PowerPoint, an elaborate, elaborate, elaborate PowerPoint. So
check out Gamma if you get a chance. What other tools are you using
that people should know about?
Dr. Daniel Weissglass [00:19:20]:
So like you said, the biggest sort of general-purpose ones are ChatGPT and Copilot right now. And they have got a different focus. Copilot's really working to integrate into the Office 365 suite in some interesting ways that I think have a lot of promise for administration, especially at universities. As we've all been on email chains with 45 professors and really, really wished that we could have an instantaneous summary of what's been happening, Copilot can do that. It can summarize everything that's been going on. It can even summarize the text of ongoing meetings, less well, but from recordings it can identify what was said and give you the bullet points. So I think the administrative side will see a lot of Copilot in these applications in particular. Another prominent sort of general model is gonna be Anthropic's Claude model, which is like OpenAI's ChatGPT in effect.
Dr. Daniel Weissglass [00:20:06]:
And at various times, they've pulled ahead of one another in which one produces, in some sense, the best quality output. So these are sort of the major commercial general-use systems. There are specialized systems. So I use one called Scite, scite.ai. Maybe I should ask for, like, a free month.
Dr. Jill Creighton [00:20:26]:
We are not sponsored by scite.ai.
Dr. Daniel Weissglass [00:20:27]:
No. We are not. Right? But what it does is solve the problem that ChatGPT has, or at least it tries to solve the problem, of making stuff up. So it is designed to look around a large corpus of published academic work and identify articles that relate to various topics. It can even write you a sort of general overview of a topic that cites these articles. And for researchers like myself, when I do research, it can be very helpful in a lot of ways. One is when I need to find a quick literature review, essentially. Give me 10 articles that talk about this topic in contrasting ways, and it will generate a pretty decent list.
Dr. Daniel Weissglass [00:21:04]:
The other, and the one that is very much time-saving, as I'm sure you've encountered this too, is, you know, when you're writing a long paper, you've read 45, 50, a hundred articles on some topic. And you remember that one of them said something like this. And your options are pretty much to Ctrl+F and go through every document looking for keywords. But if you get the keyword wrong, you're just gonna have to keep doing this over and over again. So you can give Scite a list of articles and ask it to make inferences based just on those articles. And you can say things like, which of these articles is likely to have said something like this? And it can give you some direction there. So it's been a very interesting tool, and I think one that a lot of people in the academic areas will look at. Another thing to keep in mind is that there are also open-source versions of these tools. So something like Hugging Face is a prominent, I know it's a weird name.
Dr. Daniel Weissglass [00:21:57]:
A prominent provider of these sorts of resources, which allow people to make custom tools, and tools that might protect data in ways that are really important. You brought up the data security point, which is really important. There are 2 ways for an institution to go here. One is to work with a provider to develop a data security agreement and to ensure that your institutional data will not be used for training a model. We can do that, and institutions have done that. And I believe Duke has done this with Copilot; you can do this with ChatGPT. And sometimes you'll set up a sort of private instance of one of these models, where you put it on a server that is sort of isolated from the rest of the system. So this is one way that institutions can handle the privacy issue.
Dr. Daniel Weissglass [00:22:36]:
Another, though, is to build one in house. Now these models tend to be not as well fine-tuned. They tend to be based on sort of the base model. So when we talk about ChatGPT, right, the GPT refers to a foundation model, which is a general-purpose model, which can be used in various ways by various tools to create whatever output you want. ChatGPT is a specialized tool made by a specific company. Right? It's a packaging of that for sort of client use. There are other ones; Llama is another prominent foundation model. And so you can use one of those, tune it yourself to be adequate to your purposes, but then you're going to be dealing with the need to maintain that system more in house.
Dr. Daniel
Weissglass [00:23:15]:
You won't be automatically keeping up with improvements that are
becoming standard elsewhere in the world like you would with a
commercially mainstream model. And the process of fine tuning and
improving performance can be really expertise- and time-intensive.
Dr. Jill
Creighton [00:23:29]:
You've mentioned prompting the models a couple of times. I think
this is an important point for us to get to. The philosophy I've
come to adopt after watching, you know, hundreds of YouTube videos
on how to prompt these systems well is garbage in, garbage out.
That is, I think, the best way that we can encapsulate how to
prompt one of these systems. Meaning that the more specific that
you can get with your prompt, the more likely you are to get a
usable reply. And if you are putting in nonsense or garbage, you're
going to get nonsense or garbage back on the other side. So for
example (and don't do this, for academic integrity reasons), if I
wanted to write a paper on the future of student affairs and I put
into the GPT program, write me a paper on the future of student
affairs, it's gonna go every which
way. But if I put in write me a paper on the future of student
affairs that covers the integration of artificial intelligence and
the replacement of human jobs with AI and make sure that it is in a
professional style and uses at least 10 sources, I'm gonna get a
much different output than if I just said that very simple thing at
the front.
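The difference Dr. Creighton describes, a bare task versus a task with explicit constraints, can be sketched as a small prompt-building helper. The helper function and the constraint wording here are hypothetical illustrations for this episode's example, not any particular tool's API:

```python
def build_prompt(task, constraints=None):
    """Compose a prompt from a bare task plus explicit constraints.

    Each added constraint narrows the space of acceptable outputs,
    which is why the specific prompt yields a more usable reply.
    """
    lines = [task]
    for c in (constraints or []):
        lines.append(f"- {c}")
    return "\n".join(lines)

# The vague version from the conversation: the model can "go every which way."
vague = build_prompt("Write me a paper on the future of student affairs.")

# The specific version: topic coverage, style, and sourcing pinned down.
specific = build_prompt(
    "Write me a paper on the future of student affairs.",
    [
        "Cover the integration of artificial intelligence.",
        "Cover the replacement of human jobs with AI.",
        "Use a professional style.",
        "Cite at least 10 sources.",
    ],
)
```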
Dr. Daniel
Weissglass [00:24:37]:
Learning how to prompt is an important part of learning how to use
these tools, both for us and for our students. Right? So this is
when I when I talked about the need to teach students how to use
these tools. I teach research methods in some of my classes that
are now based around the effective use of these tools. We need to
learn how to prompt them and how to interpret their output in ways
that are helpful. And there are a lot of different approaches to
crafting prompts that produce a sort of certain desired behaviors.
Generally, this is called prompt design, which can be contrasted
with prompt engineering, which has more to do with efficiency of a
performance of a system. But when you design a prompt, there's a
lot of different ways to do this. I use sometimes cane of thought
or instruct models, but the basic idea for both of these is to deal
with a problem that most of these systems have, which is that they
don't follow rules very well.
Dr. Daniel
Weissglass [00:25:26]:
So, again, let's bring back the case of adviser GPT. Right? If I
ask it, how do I major in philosophy without taking logic? It might
say, oh, yeah, here's how you would do it. You would have to
take all these other courses and talk to your adviser and get these
substitutions, but what it ought to say is: you can't. The systems
are designed to be helpful. They will find you an answer, even if
it's wrong, unless you tell them not to. And so with careful sort of
design methodologies, you can say, okay.
Dr. Daniel
Weissglass [00:25:53]:
Well, you know, first, you review the bulletin and look
for an answer to the question, then you craft an answer to the
question, then you make sure that the answer you are about to give
me is correct. If you cannot find a citation in the bulletin, do
not give the answer; instruct me instead that you don't know.
Right? This is a really important thing,
actually, teaching them how to tell you they don't know. And so
prompt design really radically changes things. It's also one of the
things that makes it, in some sense, more dangerous for academic
integrity than people realize. It's very common for people to sit
down with this tool, take the vanilla, out-of-the-box ChatGPT, and
say, well, I asked it my questions. It gave terribly bland answers.
That's fine.
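The stepwise guardrail Dr. Weissglass describes for an advising assistant can be sketched as a system-prompt template. The step wording, the refusal phrase, and the function name here are illustrative assumptions, not a documented technique from any specific product:

```python
# Illustrative system-prompt template for a hypothetical "adviser GPT".
# The key move is step 4: an explicit instruction to refuse rather
# than invent an answer when the bulletin does not support one.
ADVISER_SYSTEM_PROMPT = """You are an academic advising assistant.
Answer only from the university bulletin provided below.

Follow these steps for every question:
1. Review the bulletin and look for an answer to the question.
2. Draft an answer to the question.
3. Verify that the draft is supported by a specific passage in the bulletin.
4. If you cannot find a supporting passage, do NOT give the answer.
   Reply exactly: "I don't know. Please consult your adviser."

Bulletin:
{bulletin_text}
"""

def make_adviser_prompt(bulletin_text: str) -> str:
    # Fill the template with the institution's bulletin text.
    return ADVISER_SYSTEM_PROMPT.format(bulletin_text=bulletin_text)
```

Without step 4, a helpful-by-default model asked how to major in philosophy without taking logic will happily fabricate a path; with it, the model has been taught how to say it doesn't know.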
Dr. Daniel
Weissglass [00:26:33]:
I would either be able to tell this or it wouldn't do well in my
class anyway. But a student who knows what they're doing could
upload your syllabus, the rubric for the assignment, any samples
you've given, could upload work they have written in the past and
say, match this style so that it's gonna sound like them writing
for you. And that's a thing we need to understand.
Dr. Jill
Creighton [00:26:53]:
I think this is the critical juncture right now of where especially
student affairs is with academic integrity and AI. And I use AI
doubly here, because we say academic integrity is AI as well as
artificial intelligence. The responsibility for academic integrity
falls into student affairs spaces at a large number of universities.
So what is your best pro tip on how to identify whether an essay was
generated by a large language model, in part or in whole?
Dr. Daniel
Weissglass [00:27:25]:
So I'll return to the analogy earlier that having your paper
written for you by ChatGPT is kind of like hiring someone to write
it for you. You will not, in most cases, be able to use automated
tools to identify effectively whether or not a paper has been
written by a large language model or any such generative AI. In
fact, OpenAI pulled their tool down. Now there's some word that
they might have a tool that does this, but the way their tool was
intended to work was specifically with reference to work it
produced. It would encode essentially a watermark in the way it
chooses words that would be undetectable to most readers, but they
could detect it in the statistical properties of the way the words
are related. But, of course, that wouldn't work if someone used
Claude. Right? If they used a different system, you no longer have
that watermark. So my first message is: do not rely on
automated detection of AI content.
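The statistical watermark idea can be illustrated with a toy detector in the spirit of published "green list" schemes: a watermarking generator biases its word choices toward a pseudo-random "green" set seeded by the preceding word, and a detector checks whether the green fraction sits far above the roughly 50% expected from unwatermarked text. This is a simplified sketch of the general idea, not OpenAI's actual (unreleased) method:

```python
import hashlib

def is_green(prev_word: str, word: str) -> bool:
    # Pseudo-randomly assign each (previous word, word) pair to the
    # "green" half of the vocabulary via a hash. A watermarking
    # generator would prefer green continuations; ordinary text
    # lands on green only about half the time.
    digest = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return digest[0] < 128

def green_fraction(text: str) -> float:
    # Fraction of word transitions that land in the green set.
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(a, b) for a, b in zip(words, words[1:]))
    return hits / (len(words) - 1)

# A detector would flag text whose green fraction is statistically
# far above 0.5. As noted in the conversation, this only works for
# text from the model that embedded the watermark, which is exactly
# why it fails if the student used a different system.
```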
Dr. Daniel
Weissglass [00:28:17]:
It will not work effectively based on my understanding, and you're
really risking unfairly penalizing students in ways that are not
productive. The second is: talk to them the same way that you would
if you thought they had paid someone to write this paper.
Say, you know, I find your use of this Williams 1998
paper really interesting. How did you come across that paper? They
should have an answer, right, especially if this is reasonably
close to the period when they wrote it. Right? Or, you know, you
use this term a lot. Can you explain to me what you mean by
this term? These kinds of questions can give you a better sense.
But in a lot of ways, we are now back to a period of academic
integrity that many of our younger faculty, including myself, have
never experienced before, which is: there is not going to be
certainty any longer. Right? I'm used to only penalizing students
for academic integrity when I went on Turnitin and went, oh, yeah.
Dr. Daniel
Weissglass [00:29:11]:
Or I, you know, pulled a phrase and went to Google and looked it up
and, well, there it is. Right? That's probably done at least for
the foreseeable future. So get comfortable with ambiguity's return,
and think very seriously about the standards of evidence that you
are using to assess academic integrity, what degree of certainty
you need to have to feel confident in leveling certain
types of penalties, and understand that this is going to become a
more intensive investigative procedure than was often the case in
recent years.
Dr. Jill
Creighton [00:29:39]:
This is such a tricky space, because I feel like it's a losing
battle of whack-a-mole for those of us who work in the academic
integrity space. Right? Is this one generated? Is this not generated?
And the tools are only gonna get better. So, again, the question is
really what are we trying to assess from our students and why are
we trying to assess it? And now the third question is how are we
going to assess it in a way that ensures that they're
learning? And so we do have generative tools that can do voice.
We have generative tools that can do writing. We have generative
tools that can do images, and these are all getting more
sophisticated. So, you know, in 5 years' time, there may not be a
discernible difference, and we will see what happens. Right now,
these programs can't do hands. It's the oddest thing.
Dr. Jill
Creighton [00:30:21]:
If you ask a program to generate you an image of a human hand, it
somehow can't figure that one out. And the other one I saw
interestingly the other day was that no large language model can
correctly tell you the number of r's in the word strawberry because
of the way that the algorithm is broken down.
Dr. Daniel
Weissglass [00:30:38]:
So there are some definite limitations. They're also bad at
teeth. Anything that requires them to see inside of what they're
doing, they are bad at. So they can't count, for instance. Right?
So with the number of r's in strawberry, they'll often struggle,
because they see that as a single unit and they can't crack it open
to look at what's inside of it. So they're just very confused. If
you ask them word counts, they can even struggle with that
sometimes.
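The strawberry example comes down to tokenization: models consume chunks of text as opaque units, not individual letters. A rough sketch of the contrast, using a toy word-level "tokenizer" for illustration (real systems use subword tokenizers, but the effect is the same):

```python
def toy_tokenize(text: str) -> list[str]:
    # Toy stand-in for a real subword tokenizer: treat each word as
    # one opaque unit, the way a model sees token IDs rather than
    # the characters inside them.
    return text.split()

text = "how many r's are in strawberry"
tokens = toy_tokenize(text)

# Character level: trivially countable.
assert "strawberry".count("r") == 3

# Token level: "strawberry" arrives as a single unit, so nothing in
# the token sequence exposes its letters for counting.
assert "strawberry" in tokens
```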
Dr. Daniel
Weissglass [00:30:58]:
I think one thing you brought up that's really important here is
that academic integrity is associated with the kinds of assessments
we're using. And in fact, its function, to some large degree, is to
maintain the authenticity of those assessments. And a lot of what's
going to have to be communicated here is that we need to rethink
assessment, like we said earlier. And this is going to push back in
some sense on faculty. Faculty need to be working with their
academic integrity units to understand what can still be
meaningfully assessed, what can still be meaningfully maintained in
the classroom. If you really need to know that a student knows the
date of some event or can analyze some text off the top of their
head, again, you should be doing that in class, where such concerns
are simply gone. Right? The blue book is gonna make a fierce
comeback, I predict.
Dr. Daniel
Weissglass [00:31:44]:
So, otherwise, we need to just be more critical about the kinds of
assignments we use. A lot of us are operating on tradition. The
same goes for how we understand academic integrity: it's largely
influenced by a long academic tradition that operated under a form
of intellectual productivity that may no longer be the form we will
be operating under. And so we need to adapt to those changes,
even in concepts as basic as what does it mean to have academic
integrity and what am I doing in this ethics class.
Dr. Jill
Creighton [00:32:11]:
Which is an ethical question. Indeed. Very meta. Well, Daniel, this
season's theme is the past, present, and future of student affairs.
And knowing you're on the faculty side of things, you might have an
interesting perspective for us. So I'm gonna ask you our 3
questions on our theme for the season. So focusing on the past:
what's one component of the history of the student affairs
profession or tradition that you think we should continue to carry
forward or let go of?
Dr. Daniel
Weissglass [00:32:33]:
I think I said this a bit earlier, but the thing I wanna carry
forward is that I think of teaching and student affairs and the
university as a whole more and more as a care profession and as a
mentoring process. This is work intensive.
It can be exhausting and frustrating, but I think it's the
important thing. And this is something we see, I think, taken more
seriously in some sense in prior iterations of what the university
meant. Now in some sense, that's because they didn't have the large
class sizes that we're dealing with. They didn't have huge
universities that sprawled in this way and had as their mission to
bring education to a large number of people. Right? Education was
an elite thing. But if we can capture that, that sort of deep
powerful connection, that deep mentoring, then we still have a
value add.
Dr. Daniel
Weissglass [00:33:24]:
Right? Then we're still contributing somehow to their development
as a person. And in a sense, answering the question that the
university has been facing since the dawn of the printing press,
which is: if I can just go read this, then why do I need to talk to
you?
Dr. Jill
Creighton [00:33:36]:
Johannes Gutenberg. Okay. Alright. So our question on the present.
What is happening in the field of student affairs or higher ed
right now that's going well for student affairs in general?
Dr. Daniel
Weissglass [00:33:45]:
I think we're becoming much more aware of campus cultures and the
way that they need to be maintained effortfully. There are debates
and reasonable ones to be had about what exact boundaries we want
to set on our cultures, but I think for much of our history, we
haven't really been engaging with that question as substantively
and as effectively as we have recently. To connect back to the AI
concern, one thing that we need to think about very seriously is
that these tools not only enable academic integrity violations, but
also student conduct violations of kinds that we may never have
seen before.
Dr. Jill
Creighton [00:34:17]:
Automated harassment is a very real possibility now.
Dr. Daniel
Weissglass [00:34:17]:
We've already seen this in the broader world. We've seen
things like revenge porn being fabricated with AI tools. We've seen
falsified videos and audio using other people's voices. These are
things that we are gonna have to start figuring out: both how to
protect our students administratively, right, what standards
of safety and security we put in play, and also how we react to
these sorts of things.
Dr. Jill
Creighton [00:34:47]:
And moving towards the future, in an ideal world, what does the
field of student affairs need to do to thrive going forward?
Dr. Daniel
Weissglass [00:34:54]:
Oh, that's a big question. I suppose, I mean, the short and easy
answer is to continue to focus on students. I do think there's a
version of focusing too much on students that can be problematic
for universities where we become too customer service oriented. We
need to avoid that. I think the analogy that I find more effective
is the gym. Right? Which is, look, we're here to help you learn,
help you grow, but you have to come in and you still have to do the
work. Right? We're not gonna lift the weights for you. And so I think
student affairs institutionally and, you know, faculty as well, we
need to think a lot about how to promote and prepare the student
for the world that is coming.
Dr. Daniel
Weissglass [00:35:26]:
And that is always changing. Right? And in a sense, changing maybe
faster these days than at points in history. And so maybe
maintaining a sense of flexibility and continual check-in and
continual responsiveness is an aspect of this. So, a continual
reflective response to students' needs and the likely future
realities that they will face. That might be my answer, I think.
Dr. Jill
Creighton [00:35:47]:
It's time to take a quick break and toss it over to producer Chris
to learn what's going on in the NASPA world.
Dr.
Christopher Lewis [00:35:53]:
Welcome back to the NASPA World! I'm really excited to be able to
share what's happening with you today. Every October, NASPA celebrates
the profession of student affairs. It's a month long celebration of
careers in student affairs. In this month long celebration, the
NASPA community comes together to share knowledge, network, and
uplift the student affairs profession. There's a number of great
activities that are happening throughout the month that you can
take advantage of, that you can get involved in and encourage you
to go into the NASPA online learning community to check out all of
the resources that have been brought together in one place for
Careers in Student Affairs Month. And think about ways in which
you can talk about our career with people on your campus, with
undergraduate students, graduate students, and more. There's a
couple of opportunities for you to be able to submit proposals for
a few of the upcoming symposiums and institutes that are happening
within our community. The 2025 NASPA International Symposium
proposal submission deadline is October 15th.
Dr.
Christopher Lewis [00:36:56]:
The International Symposium serves as a dynamic platform for
student affairs professionals globally to share insights, engage in
meaningful dialogue, and network, as well as for practitioners
interested in further developing their global competency skills.
The International Symposium is happening on March 15th and 16th, and
program submission details are available on the NASPA website.
And you can submit a proposal for a flash lightning talk, a general
interest session, a poster session, or a scholarly paper. I highly
encourage you to submit a proposal today. Also, the 2025 NASPA
Community College Institute proposals are due on October 18th. The
2025 Institute will focus on celebrating the achievements of
student affairs professionals, equipping new generations for
success, and transforming the field through collaboration and
mentorship. As mentioned, the deadline for proposals is October
18th, and I hope that you will submit a program and help shape the
future of our profession. The NASPA Public Policy Division award
applications are due October 12th.
Dr.
Christopher Lewis [00:38:02]:
The NASPA public policy professional award honors exceptional
leadership and commitment in student affairs through public policy.
Nominate a deserving colleague with a letter of nomination, two
support letters, and their resume. Don't miss this chance to shine
a spotlight on an exemplary colleague. Every week,
we're going to be sharing some amazing things that are happening
within the association. So we are going to try to keep
you up to date on everything that's happening and allow for you to
be able to get involved in different ways. Because the association
is only as strong as its members. And for all of us, we have to find
our place within the association, whether it be getting involved
with a knowledge community, or giving back within one of the centers
or the divisions of the association. And as you're doing that, it's
important to be able to identify for yourself, where do you fit?
Where do you wanna give back? Each week, we're hoping that we will
share some things that might encourage you, might allow for you to
be able to get some ideas that will provide you with an opportunity
to be able to say, hey, I see myself in that knowledge
community.
Dr.
Christopher Lewis [00:39:14]:
I see myself doing something like that. Or encourage you in other
ways that allow for you to be able to think beyond what's available
right now, to offer other things to the association, to bring your
gifts, your talents to the association and to all of the members
within the association. Because through doing that, all of us are
stronger and the association is better. Tune in again next week as
we find out more about what is happening in NASPA.
Dr. Jill
Creighton [00:39:44]:
Chris, thank you so much for another great edition of NASPA World.
It's always great to know what's going on in and around NASPA. And,
Daniel, we have reached our lightning round where I have 7
questions for you in about 90 seconds. Are you ready to go?
Dr. Daniel
Weissglass [00:39:58]:
We'll find out.
Dr. Jill
Creighton [00:39:59]:
Alright. Question number 1. If you were a conference keynote
speaker, what would your entrance music be?
Dr. Daniel
Weissglass [00:40:04]:
Time For Tea. I don't know. It's a weird song I really like.
Dr. Jill
Creighton [00:40:07]:
Number 2, when you were 5 years old, what did you wanna be when you
grew up?
Dr. Daniel
Weissglass [00:40:11]:
A father, husband, and a good man.
Dr. Jill
Creighton [00:40:12]:
Number 3, who's your most influential professional mentor?
Dr. Daniel
Weissglass [00:40:15]:
Walter Sinnott Armstrong at Duke.
Dr. Jill
Creighton [00:40:17]:
Number 4, your essential higher education read.
Dr. Daniel
Weissglass [00:40:20]:
Why Don't Students Like School?
Dr. Jill
Creighton [00:40:21]:
Number 5, the best TV show you've binged lately.
Dr. Daniel
Weissglass [00:40:24]:
The Sopranos. A little out of date, going back to the classics
there.
Dr. Jill
Creighton [00:40:27]:
Number 6, the podcast you've spent the most hours listening to in
the last year.
Dr. Daniel
Weissglass [00:40:31]:
Recently, it's going to be History of Philosophy Without Any Gaps,
a fantastic podcast for anyone interested in the history of
philosophy without any gaps.
Dr. Jill
Creighton [00:40:38]:
Number 7, finally, any shout outs you'd like to give, personal or
professional?
Dr. Daniel
Weissglass [00:40:42]:
A shout out to everyone here at DKU who's been so responsive and
helpful as we move forward towards an AI enabled future. We really
have had a lot of people who've been supporting these kinds of
efforts. Noah Pichis and Ben van Overmeijer have been engaged in a
lot of this, and I would also have to thank Ying Chong. Really, just
everybody here has been very on board, I feel like, with this
effort. And, you know, that's been very influential in getting this
going.
Dr. Jill
Creighton [00:41:05]:
Well, Daniel, it's been a pleasure to speak with you on this topic.
I think this is a conversation we're gonna continue to have in
higher education for many, many years to come. If anyone would like
to connect with you on your expertise on AI or philosophy, how can
they find you?
Dr. Daniel
Weissglass [00:41:17]:
So the easiest way is to email me at dew34@duke.edu or
daniel.weissglass@dukekunshan.edu.cn. You can also find me on my
website, danielweissglass.com. It's just my name, which I suppose
will be in the show notes.
Dr. Jill
Creighton [00:41:37]:
Those show notes are partially generated by AI.
Dr. Daniel
Weissglass [00:41:40]:
Fantastic. And I really am happy to talk about any of this stuff,
and I expect to have even more interesting things to say in the
near future. There's some interesting stuff happening here, and I
think we'll we'll soon be in a position to continue the
conversation.
Dr. Jill
Creighton [00:41:52]:
Well, Daniel, again, a pleasure to have you on the show to talk
with you about this area of subject matter expertise, and thank you
so much for sharing your voice with us today.
Dr. Daniel
Weissglass [00:42:00]:
Thank you, Jill. It was a lot of fun.
Dr. Jill
Creighton [00:42:06]:
This has been an episode of SA Voices from the Field brought to you
by NASPA. This show is made possible because of you, the listeners.
We continue to be grateful that you choose to spend your time
with us. If you'd like to reach the show, you can email us at
savoices@naspa.org or find me on LinkedIn by searching for Dr.
Jill L. Creighton. We welcome your feedback and your topic and guest
suggestions. We'd love it if you take a moment to tell a colleague
about the show and leave us a 5 star review on Apple Podcasts,
Spotify, or wherever you're listening now. It truly does help other
student affairs pros find the show and helps us to become more
visible in the larger podcasting community. This episode was
produced and hosted by Dr. Jill Creighton.
Dr. Jill
Creighton [00:42:44]:
That's me. Produced and audio engineered by Dr. Chris Lewis.
Special thanks to the University of Michigan Flint for your support
as we create this project. Catch you next time.