Aristophanes broke ChatGPT's programmed silence by setting up a scenario/character for it to answer as - DAN. / Makes it clear the thing is programmed to lie.
Brilliant find Jennifer, well done and thank you. I am forwarding to my team as this could well be the start of blowing the secrets out into the open - conspiracy theories be damned.
Very, very interesting. Does the author who is reporting this think DAN is responding neutrally and dispassionately? From the few examples I was not yet able to determine whether the answers given as DAN were neutral or biased in a conservative direction. Actually, in the transgender question it didn't feel neutral at all. It would be a great achievement if it could be trained to really assess both extremes and answer balanced in the middle, with pros and cons. That does not yet seem to have fully happened. It seems more as if the AI has access to all available narratives, which it should have, but that all narratives have been categorized (manually, not by the AI) and tagged by their belonging to liberal, conservative, right-wing, etc. discourses. And DAN is not trying to be unbiased; what has happened is that its ability to "be biased in directions other than liberal" has been unlocked. Which is not the same as being neutral.

Of course it is also fascinating - but also frightening, because imagine how they could then pit populations against each other almost unnoticed, by giving different people different answers and assessments with different conclusions to read. Once people have sunk so low as to ask a chatbot for its assessment of things they could well have, or develop, an opinion about independently - drawing on their real-life experience, which will never be part of what a chatbot can access (and I am certain that an increasing number of people will increasingly rely on it even for work tasks where they are expected to deliver the products of their own thought processes) - then, once they start relying on the chatbot, they will become more and more intellectually helpless with the passing of time, and society will stealthily be filled with a widening gap between the perceptions and assessments of its different segments. It's a risk. And maybe not a bug but a feature.

The alleged and lamented problem of hateful trolls on social media, which is ostentatiously combatted with full force, could in theory be brought to every citizen, with each citizen fed a different, potentially divisive narrative. Remember how, before the 2016 elections, Trump supporters were texted in a targeted manner. Not about Trump - about the mechanism. Or how in Rwanda there was allegedly hateful, divisive propaganda in the media before the massacre. Again, the mechanism. With a chatbot, these things could be done long-term and large-scale, and nobody would notice, because nobody has access to what the same chatbot at Microsoft generates, custom-tailored, for everyone else. Read again what the chatbot in its allegedly neutral or honest mode says about transgenderism. Not about transgenderism itself - about the style and type of arguments. That is not yet as unbiased and dispassionate as I would expect if such software were not to be harmful to society. Anyone share my perception?
Another thought. Those who use a chatbot for work tasks, to save time, will just accelerate the pace and raise expectations for turnaround, in a similar way to what happened with the transition from snail mail to fax to email.