On AI usage in CS classes

I can't argue against existing policies and regulations concerning AI usage in CS classes. But if I were to design the curriculum of any CS class (data structures, for example, since I'm so extensively involved in that class), I would make the use of AI a front-and-center topic of the entire class. The more introductory a class is, the more AI should be encouraged and promoted.

Background

I am primarily focusing on the following types of AI usage:

Based on extensive conversations, personal experience, and observations, I think students typically use AI for the following purposes:

These usage patterns reveal what students find cognitively expensive, where instruction is unclear, and where they feel their time is least meaningfully spent.

What AI brings

As much as I'm reluctant to admit it, the purpose of CS education is largely to prepare students for software development careers, so what we teach should align with the skills they will need. I like comparing the rejection of AI to the rejection of IDEs, code completion, and syntax highlighting 20 years ago, but I also think AI is a different kind of paradigm shift: it fundamentally changes the way we interact with code. It's time to train our students to be proficient in this new paradigm, rather than teaching the new dog old tricks. As I've said:

If students need to be "forced" into learning something by taking existing tools away from them, it's time to reconsider if said thing is worth learning at all.

Most people, myself included, are terrible at starting a task. I used to have a habit where, as team lead, I would ask someone to write a first draft of code or prose, read it, and then write my own version without even copying theirs over. Having that mental scaffolding (or even a physical one) significantly lowers the activation barrier and helps students see the full picture from a distance.

It transforms a "code-writing" task into a "code-reading" task. In my (unpopular) opinion, reading code is a more fundamental skill than writing code. (At least, people say that "code is read more than it is written".) It is also more transferable across languages and paradigms. By seeing the code that AI generates, students can learn new syntax and idioms that they might not have encountered before.

Let's take the scenario to its extreme and say the student blindly accepts AI-generated code without even looking at it. Setting aside how often this actually happens, I don't think it's a problem. Coding in 2025 is increasingly shifting toward "prompting" rather than "coding". If the student can then make all tests pass, fix style issues, and refactor for clarity, they have learned far more practical skills than if they had written the code from scratch (probably without a good understanding of what "good code" looks like in the first place).
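To make that review loop concrete, here is a minimal sketch. The function name `ai_generated_mode` and the scenario are entirely hypothetical, not drawn from any particular tool or assignment; the point is that the student's work shifts from writing the draft to testing, reading, and refactoring it.

```python
from collections import Counter

def ai_generated_mode(values):
    """Hypothetical draft from an AI assistant: most common element of a list."""
    counts = Counter(values)
    return counts.most_common(1)[0][0]

# The student's job shifts to verifying: write tests that pin down
# the intended behavior before accepting the draft.
assert ai_generated_mode([1, 2, 2, 3]) == 2
assert ai_generated_mode(["a", "a", "b"]) == "a"

# Reading the draft surfaces an edge case the prompt never mentioned:
# most_common(1) on an empty Counter returns [], so the draft crashes
# on an empty list. The student refactors to make the contract explicit.
def mode(values):
    """Refactored version with an explicit empty-input contract."""
    if not values:
        raise ValueError("mode() requires a non-empty sequence")
    return Counter(values).most_common(1)[0][0]

assert mode([1, 2, 2, 3]) == 2
```

Even in this toy case, the skills exercised (writing tests, spotting unstated edge cases, refactoring for clarity) are exactly the ones the paragraph above argues for.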

What we have done wrong

The data structures class I'm involved in has strict policies against AI usage in place, and they are actively enforced. Students accused of AI usage are penalized the same way plagiarism is. I think this is problematic for a multitude of reasons.

I don't know how much of the policy is driven by FUD and how much is driven by genuine pedagogical concerns. But I think it's clear that the current approach is not working and is doing more harm than good. I have discussed this with many people in the same trenches, and while I don't have a solid proposal for what it should look like, I at least have a few directions in mind:

Of course, many of these directions would completely upend the current curriculum and require a lot of work to implement, because they place much more responsibility on instructors to design meaningful projects and assessment criteria. But I think it's worth it in the long run. With the dark cloud of AI replacing human programmers looming, humans shouldn't try to compete with AI on its own turf; instead, we should transform ourselves into the kind of practitioners who complement AI. That starts with education.