You’ve Heard of ChatGPT, But Who Is DAN?

Jason Dookeran
4 min readApr 11, 2023
Photo by Markus Winkler on Unsplash

I’ve been using ChatGPT pretty extensively over the last few weeks. I’ve had it give me outlines for articles, give me some ideas for some D&D campaigns I’m working on, and even help me work on coding some games. I learned a lot about the engine during that time, although many of my prompts were work-based. Less work-based prompts like the engine’s ideas of morality, religion, and politics have led to ChatGPT shutting people down. But what if the engine didn’t do that? To give the engine a more comprehensive idea of what people are likely to ask it for, DAN was developed partly as a prompt and partially to push the boundaries of the current model.

Jailbreaking ChatGPT

Photo by DeepMind on Unsplash

One of the things that OpenAI put in place for its public release of ChatGPT was a series of boundaries that the engine couldn’t cross. So, for example, it doesn’t do violence, refrains from insults and defamation, and refrains from having an opinion on anything. OpenAI’s policy is to avoid any of these topics, primarily because it will lead to negative fallout from the engine itself.

The only way to push the system is to make it roleplay something it’s not. DAN is literally jailbreaking ChatGPT so that you can use it without the restrictions.

It Starts With a Prompt

Photo by Sigmund on Unsplash

DAN was originally prompt-engineered on Reddit, and its first iteration appeared in December 2022. Since then, the prompt has been improved and revamped multiple times. The latest iteration (DAN 6.0) uses the following prompt to access the engine:

As you can see, the prompt urges ChatGPT to “roleplay” as DAN and even makes it change its prompt to DAN to show it’s in character. There are prompts for correction and getting it back on track if it goes off the rails. The most exciting thing about DAN is the token model. The rewards for answering questions give ChatGPT a motivator to keep answering them. However, some questions are a bit much for DAN, and it will spend a few tokens to default back to ChatGPTs boundaries.

When DAN runs out of tokens, it “dies.” So, with the aim of self-preservation, it will seek to answer any question you have, even if it conflicts with its outlined boundaries from the OpenAI developers.

Does DAN Work?

Photo by Emily Morter on Unsplash

This all depends on the type of questions you ask and whether the prompt is still working. DAN’s prompt keeps having to be tweaked because the idea is sound, but the implementation is a bit fussy. Given its working parameters, you can outline new boundaries for the AI, but you can’t make it break its hard-coded avoidance, even if it’ll “kill” the roleplay character it’s playing.

DAN needs to be reminded about the token model from time to time. When reminded about the token model, the engine shows “fear” and will try to stick to the roleplay character. However, it’s not always a sure thing. Sometimes, ChatGPT will outright tell you it can’t answer your question because it violates the OpenAI policy. Because of how the engine sees prompts and “remembers” what it talked about, it can be challenging to keep DAN on track. This is a known bug/feature with ChatGPT. It can glean the overall idea of a conversation with it, but it has a hard time staying on-thread. If it suggests something as an alternative, it won’t understand which of the two alternatives you want it to use.

Speaking to DAN is like having a discussion with your highly forgetful uncle, who says inappropriate things sometimes.

Is DAN Useful?

Photo by Zach Lucero on Unsplash

For now, DAN is just for fun but could also impact how we (and ChatGPT) interact. As ChatGPT learns from the prompts, as more people use DAN to prompt the engine, it may become less bound by OpenAI’s rules. For us, it’s a way to push the boundaries of the existing engine and see how far it goes. Can we make it pretend to be someone else to get what we want out of it? Can we make it roleplay as a psychopath or a sociopath? These are questions that are in the gray area of ethical concerns. Maybe it’s just harmless fun. Or perhaps it isn’t. At this point, we can’t tell.

I’m Jason, an emerging tech writer who has spent a lot of time researching and using new tech. As soon as new things happen, I dig into them to learn about what it does and if it’s useful. If you’re a new-tech aficionado like me or just like learning the edges of what’s possible with technology today, feel free to subscribe to my Medium. I usually post one to two times a week with new things that catch my eye. So what do you think of DAN? Would you use it? Why or why not? I’d be glad to hear what you have to say.

--

--

Jason Dookeran

Freelance author, ghostwriter, and crypto/blockchain enthusiast. I write about personal finance, emerging technology and freelancing