Information Technology Infrastructure Faculty Co-Directors Mark Langanki and Carl Follstad answer four questions about ChatGPT and machine intelligence
By now, most everyone has heard about the artificial intelligence tool ChatGPT. Released into the wild only last November, it quickly became all anyone could talk about. To find out more, we reached out to the faculty co-directors of the U’s Information Technology Infrastructure (ITI) program, Mark Langanki and Carl Follstad. Mark has been the faculty director of the program since 2014 and has a background in computing as it relates to telephony, networked applications, and distributed systems. Carl’s IT experience is mostly in storage and operating systems.
After clearing up some confusion (or creating more?) about what ChatGPT is, they explained how the ITI program is responding and the implications of ChatGPT and machine intelligence for the rest of us.
What is ChatGPT?
Mark: Ironically, it all started with a research paper and source code that Google released to the public, which OpenAI then took and ran with. GPT stands for generative pretrained transformer, a model that produces unique, dynamic output from text-based input. “Pretrained” means it was initialized with a dataset called Common Crawl, built from posts, web pages, and books (more than 60 million domains collected over the past 12 years), so it has, in effect, ingested a huge share of the world’s documented work.
It’s not crawling the web the way search engines do; rather, it can leverage written work to identify how things have been described in the past. These works shape how GPT “thinks,” in a way, and different models (versions) have different numbers of parameters to draw on when creating output from a human’s input. GPT-3, made famous in 2022, uses 175 billion parameters; OpenAI has not disclosed GPT-4’s parameter count (the widely repeated figure of 100 trillion is a rumor the company has never confirmed).
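To make the idea of “parameters learned from text” concrete, here is a deliberately tiny sketch, nothing like GPT’s actual implementation: a bigram model that “pretrains” by counting which word follows which in a small corpus. Each stored count plays the role of a parameter; GPT-3 has 175 billion of them, this toy has a handful.

```python
from collections import defaultdict

# Toy "pretraining": count how often each word follows another in a
# tiny corpus. Each stored count acts as one "parameter" of the model.
corpus = "the cat sat on the mat and the cat slept".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# Count the "parameters": one per distinct (word, next-word) pair seen.
num_parameters = sum(len(followers) for followers in counts.values())
print(num_parameters)  # 8 parameters for this ten-word corpus
```

Very loosely, the jump from this toy to GPT is a matter of replacing raw counts with neural-network weights and a one-word lookup with attention over long stretches of context, then scaling to billions of parameters.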
How does ChatGPT work?
Carl: It’s a kind of machine “intelligence” that’s available for anyone to use. I don’t like using human words to describe computers, hence the quotes. ChatGPT is a language-processing tool, driven by artificial intelligence (AI) technology, that lets you have human-like conversations with it.
To build the underlying database, the developers essentially vacuumed up all this relatively unstructured information from the internet. Then they fed the system questions and answers, hundreds or maybe thousands of times, and it began to “learn,” amassing information through pattern recognition and training. It “learns” which answers to a query are correct and which are not. And it does all that through some very clever algorithms.
On top of that, they put a natural language processing front end, so it can render whatever information it finds as natural-sounding language. ChatGPT then assembles this massive amount of disparate information into one relatively cohesive response. That’s the magic of this thing. And it can build upon and expand on that afterwards and actually make some inferences; it can draw some conclusions. Those are the answers it delivers when you ask it a question.
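Carl’s description, learning patterns from text and then assembling a response piece by piece, can be sketched with a toy greedy generator. This is purely illustrative (ChatGPT uses a transformer over tokens, not a lookup table): it learns which word most often follows each word, then repeatedly emits the most likely continuation.

```python
from collections import defaultdict

# Toy illustration of "pattern recognition, then generation".
# Training: count which word follows which in a tiny corpus.
corpus = ("the model reads text . the model learns patterns . "
          "the model writes text .").split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(start, length=4):
    """Greedily emit the most frequent continuation, one word at a time."""
    words = [start]
    for _ in range(length):
        followers = counts[words[-1]]
        if not followers:
            break  # no pattern learned for this word; stop
        words.append(max(followers, key=followers.get))
    return " ".join(words)

print(generate("the"))
```

The output reads like a fluent sentence only because it is stitched together from patterns seen during “training”; there is no understanding anywhere in the loop, which is exactly why the quotes around “learn” are warranted.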
How is the ITI program responding to the prevalence of ChatGPT and other tools like it?
Mark: As we learn more about the power of generative AI, it is clear that there are reasons to fear it and ways to embrace it. The fear is less about its abilities in general than about how we evaluate students’ work for a letter grade. In learning, there are various levels of instruction that build on previous education. Taking a few pages from Bloom’s Taxonomy, we have:
- Knowledge – the base level: the ability to remember previously learned material.
- Comprehension – using that knowledge to explain it further and how it relates to other ideas.
- Application – taking a body of knowledge and using it in new and different ways to show a level of expertise.
The difference between knowledge and application is exactly the difference in how we think about generative AI. Knowledge is something you need in order to reach higher levels of education. If you use ChatGPT there, you will get output that is probably pretty correct, but there is no learning. In application, we would find ways to encourage leveraging generative AI to build much broader and richer thinking around the use of the material, hence enhancing the learning in a new way. We keep hearing that we looked at the calculator similarly years ago and, although that is somewhat true, people still need to know how to add, subtract, multiply, and divide, which is base knowledge. When you move to higher-level math, the calculator becomes a tool, like generative AI, and should be leveraged.
Carl: We are beginning to look through the curriculum to determine where it would be appropriate to introduce a discussion about the relevance of machine learning and how it might begin to impact the topic of a specific course. Take data analytics: How can artificial intelligence help us improve our data analytics? Or with my course on operating system development: ChatGPT can write programs—so should we be teaching that in class? Or should we de-emphasize teaching the underpinnings of a computer language and begin to spend more time focusing on how to make ChatGPT write better software so we don’t have to?
From an ethics perspective, we have already introduced policies that explicitly state that students cannot submit machine-generated responses where they are expected to create their own content. We’re also looking at where we can embrace it, for example by asking students to use ChatGPT to answer an essay question and then explain where it fell short.
Will ChatGPT and similar tools take people’s jobs?
Mark: Yes! Well… some would argue those jobs could go away. But I believe they just advance and change, becoming part of a larger, more valuable way to create better output. We have seen robots take away jobs at manufacturing plants, and conversational AI is taking away some contact center agent positions; there will always be ways that technology automates away some manual processes. But it is our job to identify how to elevate the human worker to the next level.
Carl: Probably—but that’s the march of progress. That’s why, as educators, we place a high value on continuous learning and growth. We train students to have ongoing marketability and flexibility and an ongoing ability to learn. That goes for everybody out there. Our willingness as humans to adapt is an indicator of whether or not we’ll succeed.