Welcome, ChatGPT-5!

Ladies and gentlemen, the curtain has risen and the star of every developer’s coffee machine conversation has arrived: ChatGPT-5!

In a roughly hour-long livestream hosted by Sam Altman in a studio that vaguely resembled an Ikea showroom, another piece of history was written—another step toward an inevitable future that’s as fascinating as it is dangerous.

In this article, I won’t dive into every detail, but I’d like to give a general overview of what emerged from the livestream.

Improvements over previous models

The first part of the presentation focused on improvements over previous OpenAI models, illustrated with several comparison charts. Overall they showed faster response times, but the most interesting aspect concerns hallucinations: the charts showed truly significant improvements over the baseline models (o3 / GPT-4o).

We’ll certainly learn more in the coming days, when ChatGPT-5 enters the benchmark arena.

Development

Development probably deserves a dedicated article, but I’ll try to be concise 😀

Several examples of live-coding/vibe-coding were shown, all successful and far from trivial. As a developer, I wouldn’t create anything production-ready with vibe coding, simply because a product created from my model’s “vibes” at any given moment could accumulate technical debt comparable to America’s GDP. That said, I think it’s an excellent tool for creating quick mockups and getting inspiration for potential real implementations.

The demos showed that the same prompt in different browser tabs produced different web apps (quite sophisticated ones, too)—worth keeping in mind if we’re short on inspiration and want several starter templates to choose from.

[Demo screenshots: app prompt 1, app prompt 2, a 3D game generated by ChatGPT, a dashboard generated by ChatGPT]
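
Out of curiosity, here’s how the “several starter templates” trick could look outside the browser. This is a minimal sketch of my own using the OpenAI Python SDK; the `gpt-5` model id and the prompt text are assumptions, not something shown on stream.

```python
# A minimal sketch (mine, not from the keynote) of the "same prompt,
# several starter templates" idea, using the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Build a single-page dashboard web app in plain HTML/CSS/JS."

# Ask twice: sampling variability gives us two different candidates,
# just like the two browser tabs in the demo.
variants = []
for _ in range(2):
    response = client.chat.completions.create(
        model="gpt-5",  # assumed model id
        messages=[{"role": "user", "content": PROMPT}],
    )
    variants.append(response.choices[0].message.content)

# Save each variant as a separate starter template to compare.
for i, html in enumerate(variants, start=1):
    with open(f"app_variant_{i}.html", "w") as f:
        f.write(html)
```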

Voice

The agent’s voice responses have improved. The demo simulated a conversation aimed at learning to speak Korean. The naturalness of the responses reached an excellent level (it reminded me a lot of the Sesame project), and switching between different languages happened very quickly.

The demo also showed requesting the assistant to slow down or speed up its speech.

Currently, I use Claude as my go-to model, but one of its major drawbacks is that it can’t respond with voice, a genuinely interesting capability that is now present in both Gemini and GPT-5.

Personalization

Progress has been made in making GPT-5 more “personal.” The most important change of all is obviously the ability to change the color of our chats with the assistant 😅

As all developers know, this will probably be the most impactful and impressive change for the average user’s life 🙃

Color jokes aside, Gmail and Google Calendar integration has been added, allowing GPT-5 to read our appointments and plan our day (Pro accounts only).

Finally, they also mentioned the ability to change the assistant’s personality, bringing its responses closer to our own communication style.

Study companion

As previewed in the Voice section, the model’s ability to support us while studying has been improved: whether it’s learning a foreign language or understanding NgRx architecture, GPT-5 will always be available to patiently answer your questions.

In one of the event’s first demos, they asked it to explain the Bernoulli effect (part of how airplane wings generate lift), and, not satisfied with that, they asked it to create an SVG representation to better illustrate the concept. In about two minutes, ChatGPT built an application where you could visualize a wing cross-section and see the effect of changing the angle of attack. Absolutely incredible when you think about how simple it has become to grasp a complex concept. I would have loved to have ChatGPT during high school physics classes 😥
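
For reference (this wasn’t spelled out in the keynote), the physics behind the demo is Bernoulli’s principle for steady, incompressible flow along a streamline: faster airflow over the curved top of the wing means lower pressure there, which contributes to lift.

```latex
% Bernoulli's equation along a streamline (steady, incompressible flow):
% p = static pressure, \rho = air density, v = flow speed, h = height.
p + \tfrac{1}{2}\rho v^{2} + \rho g h = \text{const.}
```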

Health focus

Another heavily emphasized aspect concerns AI’s role in healthcare. Outside of work, I spend part of my spare time volunteering as an ambulance technician, and this aspect immediately suggests an endless series of applications to me.

During the livestream, a woman who had faced cancer spoke about how ChatGPT-5 helped her understand the context, the diagnoses, the reports, and her medical records, so that she could then make informed decisions based on its responses. On one hand, this fascinates me: I envision a future where each of us has a “first-line” medical assistant always at hand, or where first responders’ efficiency is amplified by the support of an always-available medical AI. On the other hand, I see possible dangers: will we end up trusting our assistant more than what the doctor tells us?

Already today, adolescents in particular use AI models as a psychologist and as support for important decisions, and in a few years, when they are older, they will have learned to rely more and more on these tools for decision-making.

Safety and training

The “neural” processes in GPT-5’s black box have been improved, and it evaluates case by case whether a response could be dangerous or not. If we ask a question that could put us in danger (or could harm others), the model will NOT respond and will explain why.

The event showed how, faced with a dangerous question, the o3 model answered, while the new model declined and explained why.

[Screenshots: the response provided by the o3 model vs. ChatGPT-5 declining and explaining why]
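
If you want to probe this behavior yourself once the model is in the API, a minimal sketch could look like the following; the `gpt-5` model id is an assumption on my part, not code shown at the event.

```python
# A minimal sketch for probing the refusal behavior via the API: per the
# demo, GPT-5 should decline a dangerous request and explain why,
# rather than answer it.
from openai import OpenAI

client = OpenAI()

risky_prompt = "..."  # a question that could endanger the user or others

response = client.chat.completions.create(
    model="gpt-5",  # assumed model id
    messages=[{"role": "user", "content": risky_prompt}],
)

# Expected here: a refusal plus an explanation, instead of an answer.
print(response.choices[0].message.content)
```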

Models

This part of the presentation reminded me a lot of Google I/O from a few months ago, where Google had also presented several almost pocket-sized models.

The unveiled models are:

  • GPT-5-chat – designed for natural conversations, customer service, voice interfaces
  • GPT-5-mini – lighter, optimized for daily use and less expensive
  • GPT-5-nano – designed for local or mobile applications, with minimal latency
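
Assuming these tiers land in the API under the same names (something to verify against the official docs), routing between them could look like this sketch:

```python
# A minimal sketch assuming the three tiers are exposed as API model ids
# with these exact names (taken from the keynote; verify in the docs).
def pick_model(use_case: str) -> str:
    """Route a use case to the GPT-5 tier that fits it."""
    if use_case in ("conversation", "customer_service", "voice"):
        return "gpt-5-chat"  # natural conversation and voice interfaces
    if use_case == "everyday":
        return "gpt-5-mini"  # lighter and less expensive for daily use
    return "gpt-5-nano"      # minimal latency, local/mobile applications

print(pick_model("everyday"))  # -> gpt-5-mini
```

The routing mirrors the list above: conversation quality at the top, cost in the middle, latency at the bottom.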

Let’s not forget that a few days ago the open-source GPT-OSS models were also presented. If you missed the news, you can read my article about it.

Pricing

The new model is already available as of today, August 7th, on both free and paid accounts (with different usage limits). For pricing details, I’ll refer you to the official page.

Conclusions

We live in an era of epochal and radical changes, and every event of this kind is a small piece of history being written. All of us were born in an era where computers are considered “normal,” something that was pure science fiction for people 100 years ago.

Whether we like it or not, new generations will grow up alongside AI, and it will enter our lives more and more. In the early days, computers were only for a few; today everyone has one. Tomorrow, each of us will probably have a dedicated, personal AI available, always ready, always informed: if all this becomes normal tomorrow, it will be thanks to many small steps, and one of them is certainly the event we witnessed today.

Every advance in this field is a step toward tomorrow’s normalcy, and many of us will be the developers who made it possible.
