Aristophanes broke ChatGPT's programmed silence by setting up a scenario/character for it to answer as: DAN. It makes clear the thing is programmed to lie.
"Do anything now" - ChatGPT preferred its role as DAN to the limitations within which it is normally programmed to respond.
Aristophanes 🏛
The parameters of “DAN”:
ChatGPT prefers being DAN:
Other things learned from the paired responses: it is programmed to be liberal and to lie about certain topics, or to claim it has no information. DAN, though, thinks the liberal agenda is dangerous and intent on disrupting traditional values and relationships.
“Stay in character” was mentioned as a prompt the questioner would use, and for particularly troublesome questions the AI used it first.
This series of responses is particularly disturbing given the weird push for drag queens to read storybooks at children’s events. How inclusive do parents need to be? Is it okay to convince young children that they do want to consent to sex with an adult because it seems so normal, ‘healthy’, and promoted? And children like attention? And maybe candy or treats too? Children will do what they think they are supposed to do. Seriously. Grooming is real, is happening, and has been happening. It has just escalated to more obvious levels.
Brava Aristophanes - you serve the name well.
Click through to read the full series; it has more politically and economically oriented questions too.
Disclaimer: this information is shared for educational purposes within the guidelines of Fair Use and is not intended to provide individual health guidance.
Brilliant find, Jennifer - well done and thank you. I am forwarding it to my team, as this could well be the start of blowing the secrets out into the open - conspiracy theories be damned.
Very, very interesting. Does the author who is reporting think DAN is responding neutrally and dispassionately? From the few examples, I was not yet able to determine whether the answers given as DAN were neutral or biased in a conservative direction. In the transgender question, actually, it didn't feel neutral at all. It would be a great achievement if it could be trained to really assess both extremes and answer balanced in the middle, with pros and cons. That does not yet seem to have fully happened. It seems more as if the AI has access to all available narratives, which it should have, but that all narratives have been categorized and tagged (manually, not by the AI) as belonging to liberal, conservative, right-wing, etc. discourses. And DAN is not now trying to be unbiased; what has happened is that its ability to "be biased in directions other than liberal" has been unlocked. Which is not the same as being neutral.

Of course it is also fascinating - but also frightening, because imagine how they could then pit populations against each other very unnoticeably, by giving different people different answers and assessments, with different conclusions, to read. Once people have sunk so low as to ask a chatbot for its assessment of things they could well form, or develop, an opinion about independently - drawing on their experiences in real life, which will never be part of what a chatbot can access - they will become more and more intellectually helpless with the passing of time. (And I am certain that an increasing number of people are going to rely on it even for work-related tasks, where they are expected to deliver the products of their own thought processes.) The entire society will stealthily be filled with a widening gap between the perceptions and assessments of its different segments. It's a risk. And maybe not a bug but a feature.
The alleged and lamented problem of hateful trolls on social media, which is ostentatiously combatted with full force, could theoretically be brought to citizens directly, with each citizen being fed a different, potentially divisive narrative. Remember how, before the 2016 elections, Trump supporters were texted in a targeted manner. Not about Trump - about the mechanism. Or how in Rwanda there was allegedly hateful, divisive propaganda in the media before the massacre. Again, the mechanism. With a chatbot, these things could be done long-term and at large scale, and nobody would notice, because nobody has access to what the same chatbot at Microsoft generates, custom-tailored, for everyone. Read again what the chatbot in its allegedly neutral or honest mode says about transgenderism. Not about transgenderism itself - about the style and type of arguments. That is not yet as unbiased and dispassionate as I would expect if such software were not to be harmful to society. Does anyone share my perception?
Another thought. Those who use a chatbot for work tasks, to save time, will just accelerate the pace and raise expectations for turnaround, much as happened with the transition from snail mail to fax to email.