From Search Engine and Content Generator to Work Partner and Confidant
ChatGPT celebrates its third anniversary this Sunday. In that time, it has evolved from content generation to multimodal interaction and deep reasoning, offering users a true work partner and confidant.
OpenAI’s chatbot now boasts 800 million users who rely on it for far more than search. They also use it as a work partner, a personal advisor, or a confidant, thanks to the conversational format that has been a hallmark of the platform since its inception.
However, it is the evolution of its capabilities that has transformed it from an entertainment tool focused on content generation to one more centered on solving problems related to work, studies, and everyday life.
This evolution is driven by the large language models at its core. When ChatGPT first appeared, it launched with GPT-3.5, which allowed it to answer questions, hold a realistic conversation with a user, and generate content from a text description.
With GPT-4, multimodal capabilities arrived in the form of image and speech understanding, taking interaction beyond text alone. It was with GPT-4o, however, that multimodality became native, enabling the chatbot to understand and generate combinations of text, audio, and images with great speed.
The company then focused on improving reasoning capabilities with models exclusively dedicated to this area, known as the o-series. This allowed ChatGPT to improve its responses by dedicating more time to thinking about them, enabling it to solve complex tasks and problems in areas such as science, programming, and mathematics.
And it took its first steps in autonomous internet navigation with the agent capabilities of OpenAI’s o3 and o4-mini, two models that, according to the company, were “trained to reason about when and how to use tools to produce detailed and thoughtful responses in the correct output formats,” and to do so quickly, in less than a minute.
ChatGPT now performs like a PhD-level expert across fields, thanks to the reasoning advances introduced by GPT-5 this summer, and can autonomously and quickly generate a complete program from just a few prompts.
That refinement extends to how the model communicates with users: its responses are more natural and can even adapt to different personalities.
THE PROS AND CONS OF THE CHATBOT
Interactions with ChatGPT can produce what are known as hallucinations and deceptions; that is, responses in which information is fabricated or used incorrectly.
This isn’t a problem unique to ChatGPT; it also appears in other chatbots such as Gemini, Claude, Perplexity, and Grok. But the main issue is that these chatbots are treated as a reliable source of information, so the resulting fabrications can lead people to accept false information as fact without verifying it first.
To mitigate this, OpenAI, like the companies behind the other chatbots, typically strengthens the training of its base models and configures them to display links to the sources of their information, to state clearly when they cannot complete a request or task, and even to refuse to answer questions that pose a risk.
This last point connects to another problem that has emerged with the use of ChatGPT: the exacerbation of mental health issues through the emotional dependence some people develop when they use the chatbot as if it were a friend or confidant, a dynamic encouraged by its conversational interaction and natural language.
OpenAI claims that GPT-5 better identifies signals in conversations that indicate mental health symptoms and emotional distress, and reduces the occurrence of unwanted responses—which worsen the situation—while maintaining safeguards even in long conversations, where they typically fail, and providing resources for seeking help.
Alongside its improved handling of mental health issues, OpenAI has also introduced changes that restrict teenagers’ use of ChatGPT, including tools that let parents and guardians monitor their interactions and limit the appearance of certain content. The company is also developing an age-detection system to tailor the user experience.
These changes came after an American family sued the tech company over their teenage son’s suicide, arguing that ChatGPT played a significant role in his decision due to a failure in safeguards.
These problems arose with more continuous and in-depth use of ChatGPT, but from the outset, given its content-generating capabilities, copyright issues were prominent. These concerns stemmed from the sheer volume of data (text, video, images, audio) required to train the models, which is not always sourced from free and open-source materials.
Finally, cybersecurity must not be overlooked. Although OpenAI and other companies developing these models claim to implement limitations and barriers, one of the first applications detected with ChatGPT was the generation of malware. Since then, cybercriminals have refined its use to launch malicious campaigns, generate biased content, and monitor social media conversations.
The misuse of the chatbot extends to generating videos and photographs that simulate real situations, or recordings that realistically reproduce the voices of public figures, known as ‘deepfakes’, which are used to promote fake news, disinformation and propaganda, pornographic manipulations, and financial fraud.
