O1 Preview
Last week, OpenAI presented a new model that made everybody think twice, and it's worth understanding what this model actually is.
They launched a model called o1-preview. It's the first of a new family of so-called reasoning models, meaning it can think through a problem and work out its answer step by step.
This model takes longer to answer, because it needs time to think about the question.
The time it uses, though, is well spent. According to OpenAI, the new model solves 83% of the problems on a qualifying exam for the International Mathematics Olympiad, compared to only 13% for GPT-4o.
This result alone deserves a standing ovation 👏. It's a giant leap forward: going from 13% to 83% is more than a sixfold improvement (a 538% increase) in logical problem-solving. It's as if, until now, you were asking a magic 8-ball that could only answer yes or no 🎱, and now you've been upgraded to Google Search.
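For the curious, here is the arithmetic behind that comparison, using the two scores OpenAI reported:

```python
# Reported scores: GPT-4o vs. o1-preview on the IMO qualifying exam.
gpt4o_score = 13  # percent of problems solved
o1_score = 83     # percent of problems solved

ratio = o1_score / gpt4o_score                        # how many times better
increase_pct = (o1_score - gpt4o_score) / gpt4o_score * 100  # relative increase

print(f"{ratio:.1f}x better, a {increase_pct:.0f}% increase")
# → 6.4x better, a 538% increase
```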
The leap is incredible indeed, which is probably why this is the closest we've been to AGI so far.
You've all used AI products: the results are sometimes dumb, sometimes hallucinated, sometimes heading in completely the wrong direction. Now, though, this should change. Accuracy should be much better, and the model should behave less like a robot and more like a human.
And as the memory footprint of these AIs slowly grows, they will eventually build up a detailed picture of every human they talk to. Then we reach the singularity of Her: no girlfriends needed, just finding ourselves deep in the AI's embrace.
So now to the good part: you can already try it out today in ChatGPT, with a limit of 50 generations per week. I could also add it to texti.app; let me know if you'd like to try it out.
I did try one example OpenAI showed on stage: the next-word guessing mechanism of an LLM. I wanted a visual representation of how likely one word is to be followed by another, according to the model's logic.
You can try the example on the FAQ page of Texti News.
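To give a feel for what "how likely a word is to be followed by another" means, here is a minimal, hypothetical sketch: a bigram model built from a toy corpus, which is the simplest stand-in for an LLM's next-token prediction (a real LLM uses a neural network, not word counts):

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data (hypothetical example).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count word -> next-word transitions (a bigram model).
transitions = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    transitions[word][nxt] += 1

def next_word_probs(word):
    """Return P(next | word) as a dict, like an LLM's next-token distribution."""
    counts = transitions[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# In this corpus, "the" is followed by "cat" twice, "mat" once, "fish" once.
print(next_word_probs("the"))
# → {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
```

A visualization like the one from the demo is essentially this distribution drawn as a bar chart for each word.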
A couple of downsides worth mentioning about the model:
It's clearly slower, because it takes much more time to process each request.
It's significantly more expensive to process, run, and serve. That's only for now, though; eventually it will get cheaper, just as the plain GPT models did.
It currently cannot use web search or give you retrieval-augmented responses; it only knows what it was trained on and what it can figure out by itself.
Generally speaking, this model is currently for people who want to tackle really advanced problems; for everyone else, GPT-4o is good enough. But if you want to cheat on your maths test, you can try it out. 😉
Finally, I want to mention: we're closer to Artificial General Intelligence yet again! This makes me cheer and feel happy. I know there are risks, but progress is still the thing that lifts me up every time, because it reminds me: I live in the future. What I thought was impossible just a couple of months ago is now a reality!