For many young people, autumn is the real beginning of the year. No fireworks, no resolutions, but fresh notebooks, clean sneakers, and packed backpacks: the new university year is starting. The big theme of 2023 looks to be the same one that defined the end of last year: ChatGPT and other large language models. Last winter and spring brought a flood of headlines about AI in the classroom and lecture hall, and some panicked schools went so far as to ban ChatGPT entirely.
Jenny Frederick, associate provost at Yale University and founding director of the Poorvu Center for Teaching and Learning, knows a thing or two about the subject: among other things, she led her university’s response to ChatGPT. Yale never considered banning the tool; instead, it set out to integrate it. Below, Frederick answers the most important questions about how the elite university deals with large language models.
Generative AI is new, but requiring students to learn what machines can do is not.
When it comes to teaching, the central question is what you want students in a seminar or course to learn. If a machine can adequately perform a task, do we need to rethink what we ask of students? Do we need to raise the bar? How do we talk to our students about what it means, for example, to structure a paragraph or to conduct original research? What do students get out of this work? We all learn long division, even though calculators can do it. What is the purpose of that?
I have a faculty advisory board for the Poorvu Center and we have a mathematics professor in the group. He just laughed at the question and said, “Oh, it’s kind of amusing to me watching you all struggle with this.” For decades, mathematicians have had to grapple with the fact that machines can do their work.
So we need to think about how we justify the learning we ask of our students if a machine could do the job.
It is still too early to tell students how to use AI technology.
Yale never considered banning ChatGPT and similar tools. Instead, we have been thinking about how, in our role as a university, we can foster an environment of learning and experimentation. This is a new technology, but it is not just a technical change: it is a societal change that challenges our view of humanity, our understanding of knowledge, and our understanding of learning and its meaning.
I rounded up my staff and said, “We need a guide.” We don’t necessarily have all the answers, but we need to provide a range of resources for teachers to look at. We don’t have a policy that says “Here is what you must use, here is its scope, here is what is prohibited.” Instead, we tell teachers: make sure your students have a sense of how AI is relevant to your course and how they should or should not use it.
ChatGPT as a cheating aid matters less than the question of what leads students to cheat in the first place.
When we think about what drives a student to cheat, we quickly come to the conclusion that nobody really wants to. People pay good money for their education. But what happens is that students run out of time, they overestimate their abilities, they get overwhelmed, or something in the course turns out to be really difficult. They are backed into a corner and then make an unfortunate decision.
So I worry a lot more about things like mental health and time management. How can we help our students avoid being pushed into a corner where they can’t do what they want to do?
Of course, ChatGPT makes cheating on exams easier, but I think the path that leads there remains the same. We have to tackle the causes.
Students could endanger their privacy.
I think people are right to be somewhat concerned about their students entering sensitive information into these systems. Every time you use ChatGPT or one of its competitors, the system learns and gets better. There are ethical questions about whether we are effectively supplying labor to OpenAI, and about what kind of company that is.
We don’t know exactly how these systems work internally, or how student submissions are stored, managed, and monitored over time; that has nothing to do with conspiracy theories. If we require students to use the technology, we are responsible for their security and privacy. Yale’s privacy policies are strict, and for good reason.
Teachers should focus on their students.
Students are generally way ahead of teachers when it comes to technology. They grew up in a world where new tools come and go, and they try everything. Of course ChatGPT is all the rage, so they use it too. And they want to use it responsibly. They ask themselves: “What is allowed here? Look at everything I could do with it. Am I permitted to do that?”
The advice I gave our teachers was: try it yourself. At the very least, you need to know what your students can now do with these tools, and you should think about your own assignments and what the tools can do with them. What guidelines or instructions should you give students about whether they can use the technology, and how?
Students should not be left to figure this out alone. You can have a conversation with them and help shape how they use the technology. Why not draw on the experiences in your own seminar room? As a lecturer or professor, you should be aware that the world now has AI. Students need to be prepared for a world where this technology is integrated into industry in various ways, and we are the ones who prepare them for it.