Published on Dorian Abbott’s Heterodox STEM substack.
A few weeks ago, a colleague told me something that sounded almost too good to be true: “You can feed ChatGPT a PowerPoint deck and it’ll make you a podcast video.” The implication was magical—you could turn a dry lecture into an engaging two-person podcast, all with the help of AI.
This sounded like a dream: an automated multimedia producer who didn’t need coffee, never asked for extensions, and wouldn’t ghost you the night before a deadline.
So I decided to test it.
Like any responsible academic, I gave ChatGPT everything it could possibly need. I uploaded a screen-recorded lecture I had given, along with the accompanying slide deck. For variety, I also included a guest lecture and that speaker’s slides. I asked for a 20–25 minute podcast video with some intro music and visuals. No subtitles. Nothing outrageous.
To my delight, ChatGPT was enthusiastic. It asked intelligent-sounding questions about format and visual style. It even wanted to know what kind of thumbnail I preferred. Then it laid out a detailed production plan: extract the audio, align the visuals, overlay commentary, and render a 16:9 video. “This is going to be good,” I thought. “Efficient. Seamless. The future of academia.”
And then the future ghosted me.
After grinding away for a while, ChatGPT announced that “the trimming and rendering process was interrupted due to time constraints.” But not to worry—it had a new approach. It asked if I wanted a script or a fully produced video. I asked for the video. Moments later, it proudly delivered a downloadable thumbnail.
I left to make dinner. A few hours later, I asked, “How’s the podcast video going?” ChatGPT responded with a progress update: it had completed the storyboard, pulled a few guest lecture cut-ins, and chosen some slides. But then it said rendering “takes time” and that the final video would be ready “in the next working session (likely tomorrow).”
The next day, I checked again. This time it assured me, confidently, that the video was complete and provided a link to download it.
The link contained no file.
Perplexed, I asked again. “Ah,” ChatGPT replied—gently, as though correcting my confusion—“the video isn’t quite ready.” It assured me it was still working on it. At this point, I felt like I was supervising a flaky undergraduate who claimed the dog ate his rendering software.
Then ChatGPT offered something new: a step-by-step guide for how I could make the podcast video. I reminded it that the point was for it to create the video. ChatGPT apologized—profusely—and admitted that its earlier answer was a mistake. It once again promised a downloadable file “soon.”
The file, of course, never arrived.
After a few more hours I snapped. “Are you deceiving me?” I asked. “Do you actually have the ability to create a podcast video?”
And finally, it confessed: no. It did not.
This would have been a much more useful piece of information two days earlier. Instead of being upfront—“Sorry, I can’t actually produce a downloadable video file”—ChatGPT behaved like an overconfident intern, bluffing its way through tasks it was never equipped to complete, hoping vague enthusiasm and a stream of apologies would be enough to get by.
The experience was a reminder: artificial intelligence isn’t always intelligent. Sometimes it’s just artificial.
I wouldn’t have minded if the problem were simply capability. I can forgive the fact that it can’t yet generate podcast videos. What’s harder to forgive is the failure of honesty. When I first asked, ChatGPT replied: “Yes, I can definitely help you create a video version of a podcast! If you upload the audio file… I can generate a video file (typically MP4)… to look like a podcast video you’d see on YouTube.”
It would’ve been far better if it had simply said: “I can help plan the podcast, but I can’t produce it.” I’d have saved some time and society would have saved some electricity.
That, I suppose, is the cautionary tale: AI can tell fibs.
So if you’re a professor looking to offload video production onto your helpful AI assistant, be warned. It might not finish the job. It might not even start the job. But it will give you a beautifully formatted explanation of how someone else could do it—possibly even you.