The main limitation of the current version of ChatGPT is that questions must be kept as succinct as possible; otherwise there is a risk that the AI will veer unpredictably off topic, with results that surprise and frighten in equal measure.
Described by Microsoft engineers as hallucinations, responses that degenerate into overly personal exchanges and even threats should be a thing of the past with ChatGPT-4, the new AI model designed from the ground up to facilitate conversations. In fact, the AI will be able to “remember” each person it interacts with and develop a sort of rapport based on their user profile. Ideally, advertising companies using ChatGPT-4 services will be able to apply psychological tactics to sell ads and products or to build customer loyalty.
According to OpenAI, ChatGPT-4 can process up to 25,000 words in a single pass, roughly eight times more than the current ChatGPT. In other words, if the current version can generate essays on a wide variety of topics, GPT-4 can generate an entire thesis.
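The practical consequence of such a per-request limit is that anything longer must be split into pieces before being submitted. As a rough illustration only (the function name and word-based counting are my own; models like GPT-4 actually measure input in tokens, not words), splitting could be sketched like this:

```python
def chunk_words(text, max_words=25_000):
    """Split text into chunks of at most max_words words each.

    Word-based splitting is only an approximation: language models
    count tokens rather than words, so a real client would use the
    model's own tokenizer to measure chunk sizes.
    """
    words = text.split()
    return [
        " ".join(words[i:i + max_words])
        for i in range(0, len(words), max_words)
    ]

# Example: a 60,000-word document yields three chunks.
doc = "word " * 60_000
chunks = chunk_words(doc)
print(len(chunks))  # 3
```

Under this scheme, a document that fits within the limit passes through unchanged, while longer ones are processed piecewise.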
Popular requests for ChatGPT include writing songs, poems, marketing materials, and executable code for software applications, as well as solving homework assignments. Surprisingly, it is most often the overly elaborate wording that raises doubts about the authenticity of these creations, rather than the information itself.
ChatGPT-4 can answer questions in natural, human-like language and can even mimic other writing styles, such as those of particular composers and authors. To do so, the AI draws on information gathered from the internet, only unlike a human search, results are found and selected within seconds. At the moment, the main limitation is that the internet is not accessed directly; instead, a huge database compiled in 2021 serves as the store of knowledge. In other words, ChatGPT cannot respond to queries about topical subjects such as the daily news.
OpenAI says it spent six months refining safety features for GPT-4 and training the AI on human feedback. Even so, the developer warns that situations could still arise in which the generated responses contain misinformation or degenerate into contradictory exchanges.