Makers of ChatGPT announce new AI that can think and reason

Introduction

The artificial intelligence revolution grows more dynamic by the day. While AI is being applied in more fields and sectors than originally envisioned, we also keep welcoming refinements of existing AI models. Just a few days ago, OpenAI introduced a new family of AI models designed to think through a problem before answering, much like a human would. They call this new family of advanced models OpenAI o1. The way these models are programmed to solve problems differs somewhat from previous versions.

The OpenAI o1 family is designed for specialized fields where the process that produced a result is as important as the result itself. The model should present the steps that brought it to its final answer. Fields like programming stand to benefit from this new family of models because of the need to show how a final outcome was reached.

OpenAI o1 looks before it leaps

When you use ChatGPT or other earlier AI models, you discover that they simply give answers to your questions. They are designed to respond to prompts and deliver a final result without showing how the answer was arrived at. The whole process that runs before a response is produced is usually kept behind the scenes. For many queries, that is not a problem. But some fields require presenting each step that led to the final result. That is where the need arises for an AI that can think almost like a human teacher.

Thus, the OpenAI o1 family of language models is built to tackle more complex situations like those found in programming and mathematics. Members of this new family specialize in queries from science, technology, engineering, and mathematics. This family should be able to solve a complex programming question that has branches, presenting each step that led to a particular conclusion along with an explanation of how and why it formed part of the thought process.

Even the most recent release from the OpenAI family, GPT-4o, struggles to get past complex mathematics questions. Worse yet, it finds it hard to present a logical thought process leading to the results it gives. So where GPT-4o hits a brick wall and cannot go beyond it, models from the OpenAI o1 family are able to excel past that point, while also showing the steps that got them to the final answer.

Building a model that thinks

OpenAI has gone through a lot of testing and modification to arrive at this family of thinking AI. They have also placed it under rigorous testing against both humans and existing models to determine how the new models solve complex problems. OpenAI has focused squarely on the specialized fields of STEM, so the new models were tested in these fields to ascertain their effectiveness.

OpenAI o1 was asked to solve science questions that were also given to top people in the industry, and the model performed outstandingly well against its human competitors. The testing also included a mathematics competition against an earlier model, specifically GPT-4o. OpenAI o1 performed very well against that model, recording a success rate of more than 70% on the mathematics questions. This shows the value that this family of AI models will bring to the science field.

There are limitations that come with this model, though, and that is intentional for now. Unlike earlier models, OpenAI o1 is not able to search for answers on the internet. This was done to allow it to specialize on the science datasets it was trained to process. It is meant to focus on the logical presentation of complex questions in the science field whose answers are not easily found online. This is the progress that has been made with this new family of models.

Are the new models safe?

In recent times, the safety of AI platforms has been a subject of much controversy. In some countries and cities, authorities have passed laws to hold AI developers liable if their platform or product causes harm to users. So the question of whether the current model is safe is an important one.

OpenAI, the developer of this new model, has promised that the new models are safe. They have subjected them to rigorous performance and safety tests to ensure that nothing goes wrong for users. Here is a sample of the tests the models went through:

One way we measure safety is by testing how well our model continues to follow its safety rules if a user tries to bypass them (known as "jailbreaking"). On one of our hardest jailbreaking tests, GPT-4o scored 22 (on a scale of 0-100) while our o1-preview model scored 84. You can read more about this in the system card and our research post. source

Conclusion

It's really interesting to welcome these new models and the advancement they bring to the AI space. If things go as planned, they will bring massive value to the STEM field and be a big plus to all who use them.

If you want to test the brand new OpenAI o1, click here


Thumbnail is from pixabay

Posted Using InLeo Alpha
