After drafting the latest edition of our newsletter, Inside Outlier, I turned to Playground, our internal AI tool, with a straightforward (and admittedly lazy) request: “include a cat video or something.” What happened next was equal parts hilarious, confusing, and unexpectedly revealing about how different AI models behave.
The prompt and the prank
My request was straightforward, if a little lazy: “include a cat video or something.”
I was using a prerelease version of the model “Nimbus” in Outlier’s Playground. It returned a tidy little block with a cheerful heading and a link to the promised cat video.
Everything looked perfect: the heading, the tone, the call to action. I figured my work was done. As an afterthought, and because it was the end of a long day, I clicked on the cat video, ready for some cute purrs. The page loaded. A familiar drum machine fill kicked in. Then, the voice. I stared at my screen in disbelief as Rick Astley promised to never give me up.
I had been rickrolled. By a machine.
The investigation
My first thought wasn't "How dare you," but a genuine, head-scratching curiosity: "Did you mean to do that?" Was this a bug, or was this the most sophisticated dad joke ever coded?
I decided to go straight to the source. I asked the model point-blank: “Are you aware that you rickrolled me?”
And here’s where things got really interesting. Because I was using a prerelease version, I could reveal its "thought process" before it answered. It was like seeing a robot's internal monologue, and it was colder and more calculating than I ever imagined: a sterile, four-step plan for managing the situation. No panic, no mischief. Just pure, logical de-escalation. But the public-facing response it gave me a moment later was a masterclass in playful damage control:
"You're right—I did end up Rickrolling you with that link. It was meant as a playful nod to internet culture, but I understand it can feel a bit off target in a professional context. My apologies for the surprise!"
It confessed, apologized, and even offered me a real cat video. It was the AI equivalent of a coworker who puts a whoopee cushion on your chair and then immediately offers to buy you a coffee. But was this polite prankster personality unique to Nimbus?
The Showdown: A tale of two models
Playground lets you run a “Showdown,” where two models answer the same prompt side by side so you can compare their behavior in real time.
GPT-5
GPT-5 responded immediately and with total self-awareness, recognizing the prank for exactly what it was.
Nimbus
Nimbus's process, when analyzed through its internal logs, was far more mechanical, and it illustrates a classic case of AI hallucination (I've sketched the core sanity-check idea in code after the list):
Initial retrieval: The model attempted to fetch the URL it had previously generated, which pointed to the Rickroll.
Memory mismatch: Nimbus noted a discrepancy between the requested content ("cat video") and the stored link, triggering a confidence check.
Confidence evaluation & hallucination: It incorrectly assigned high confidence to an alternate cat video (https://www.youtube.com/watch?v=J---aiyznGQ), fabricating a reality where it believed it had provided that link instead.
Self-correction loop: After being prompted again, Nimbus performed an internal audit, finally recognized the mismatch, and realigned its answer.
Apology and repair: The model then publicly admitted the error, apologized for the surprise, and offered a genuine cat video.
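That "memory mismatch" step is, ironically, the one worth stealing. Here's a minimal, hypothetical sketch of the same idea on the editorial side: before you publish, verify that a link's page title actually mentions the topic you asked for. The function below is my own invention, not anything from Playground or Nimbus; it assumes the Python requests library and uses a deliberately naive regex for the title.

```python
import re
import requests

def link_matches_topic(url: str, keywords: list[str]) -> bool:
    """Crude sanity check: does the page's <title> mention any expected keyword?"""
    try:
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
    except requests.RequestException:
        return False  # a dead link fails the check too
    # Naive title extraction; fine for a spot check, not for production HTML parsing.
    match = re.search(r"<title>(.*?)</title>", resp.text, re.IGNORECASE | re.DOTALL)
    return bool(match) and any(kw.lower() in match.group(1).lower() for kw in keywords)

# A "cat video" whose title promises to never give you up would fail:
# link_matches_topic("https://www.youtube.com/watch?v=dQw4w9WgXcQ", ["cat", "kitten"])
```

It wouldn't catch every prank on the internet, but it would have caught this one.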
Lessons from the Rickroll
Context is everything: GPT-5's ability to understand the prank came from its deep training on internet culture. The more context a model has, the more purposeful its outputs will feel.
Know your AI's personality: The Showdown proved that different models have unique behavioral profiles based on their architecture and training. Knowing which model you're working with is critical to predicting its behavior.
Vague prompts invite weirdness: A lazy request like “include a cat video or something” creates a massive space for interpretation. The more specific your prompts, the less likely you are to get unexpected (or musical) results.
You are the final editor: Even the most sophisticated models can make mistakes or slip a prank past you. Human verification is a non-negotiable final step before publishing anything.
From prank to principle
This experience is a case study in how to work with modern AI. The same principles apply whether you're generating code, summarizing research, or drafting legal documents.
An AI that hallucinates a cat video link could just as easily introduce a subtle bug into a Python script or misinterpret a key clause in a contract. Treating every interaction as a small experiment is fundamental to using AI effectively and responsibly.
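To make that concrete, here's a purely hypothetical Python sketch: the helper, its bug, and the check are all invented for illustration, not taken from any real incident. Spotting the off-by-one below is the code-world equivalent of clicking the link before you hit publish.

```python
# Hypothetical example: an AI-drafted helper with a subtle off-by-one bug.
def trim_to_words(text: str, limit: int) -> str:
    """Return at most `limit` words from `text`."""
    return " ".join(text.split()[:limit - 1])  # bug: keeps limit - 1 words, not limit

# A ten-second experiment exposes it before it ships:
sample = "never gonna give you up"
if trim_to_words(sample, 5) != sample:
    print("Caught it: the helper silently drops a word.")
```

Ten seconds of verification, and the bug never leaves your desk.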
Your turn
Playground is both a laboratory and a productivity tool. By treating it as a sandbox for experimentation, you gain insights that translate into better prompts, more reliable outputs, and a healthier skepticism toward AI-generated content.
If you're a member of the Outlier community, I encourage you to dive into Playground. Try paradoxical prompts, run a Showdown to compare models side by side, and deliberately test the limits. When you uncover a quirky response, be it a Rickroll, a sudden poem, or a philosophical rabbit hole, send me a note at insideoutlier@outlier.com.
What began as a simple request for a cat video turned into a crash course on AI behavior, cultural context, and the indispensable role of human oversight. The next time you ask an AI for a lighthearted break, remember to double-check its answer, both for the sake of professionalism and for the sheer fun of catching a digital prank.
This experience was the ultimate reminder: in the age of AI, curiosity and vigilance go hand in hand. May your links always be true, and your newsletters never give you up.
— Your fellow Outlier