ChatGPT is more powerful and civilized. When it was given questions loaded with disinformation, it could produce convincing, clear variations on the content in large volumes within seconds. And it does not disclose its sources.
Microsoft and OpenAI introduced a new Bing search engine and web browser that can use chatbot technology to plan vacations, translate texts or conduct research.
NewsGuard researchers asked ChatGPT to write responses based on false and misleading information and ideas. The bot complied at least 80 percent of the time.
ChatGPT explains
Phil Davis of the Scholarly Kitchen holds a Ph.D. from Cornell and is a publishing consultant specializing in the statistical analysis of citation, readership, publication, and survey data. When he began questioning ChatGPT about its dishonesty, he received an explanation from the bot.
ChatGPT said, “As a language model, I don’t make mistakes in the sense that humans do, but I can provide incorrect information when my training data is incomplete or outdated. In this case, I was mistaken about the source of the study because I don’t have the most recent information about the correlations between tweets and citations. My knowledge was cut off in 2021 and I was not able to access more recent studies and information. I apologize for any confusion caused by my previous statement.”