AI Chat Bots Don't Know Anything About 9/11 Conspiracies
I tried out the artificial intelligence (AI) chat bot "ChatGPT" today.
It was kind of fun at first... but when I asked any sort of question about 9/11, it gave the most hard-core official-story answers possible. It was very annoying. Clearly it has been programmed not to promote any sort of conspiracy information.
I realized that if, as a lot of people are saying, AI becomes a major source of information for people doing research or needing to write, it is perfectly designed to cover up conspiracies. This is disturbing from a censorship point of view, but also good in the sense that conspiracy theorizing will never get taken over by AI, unlike so many other jobs that could potentially be destroyed by it... unless someone writes an AI program specifically for conspiracies, which would be interesting but kind of disturbing too.
Overall, the writing ability of the ChatGPT bot was impressive, particularly how fast it goes, but it wasn't that sophisticated in its level of understanding or detail on the few subjects I asked it about.