OpenAI has launched its latest artificial intelligence models, the o1 series, which the company claims possess human-like reasoning capabilities.
In a recent blog post, the maker of ChatGPT explained that the new model spends more time thinking before responding to queries, enabling it to tackle complex tasks and solve harder problems in areas like science, coding, and mathematics.
The o1 series is designed to simulate a more deliberate thinking process, refining its strategies and recognising mistakes much like a human would. Mira Murati, OpenAI’s Chief Technology Officer, described the new model as a significant leap in AI capabilities, predicting it will fundamentally change how people interact with these systems. “We’ll see a deeper form of collaboration with technology, akin to a back-and-forth conversation that assists reasoning,” Murati said.
While existing AI models are known for fast, intuitive responses, the o1 series introduces a slower, more thoughtful approach to reasoning, resembling human cognitive processes. Murati expects the model to drive advancements in fields such as science, healthcare, and education, where it can assist in exploring complex ethical and philosophical dilemmas, as well as abstract reasoning.
Mark Chen, Vice-President of Research at OpenAI, noted that early tests by coders, economists, hospital researchers, and quantum physicists demonstrated that the o1 series performs better at problem-solving than previous AI models. According to Chen, an economics professor remarked that the model could solve a PhD-level exam question “probably better than any of the students.”
However, the new model does have limitations: its knowledge base only extends up to October 2023, and it currently cannot browse the web or accept file and image uploads.
The launch comes amid reports that OpenAI is in talks to raise $6.5 billion at a staggering $150 billion valuation, with potential backing from major players like Apple, Nvidia, and Microsoft, according to Bloomberg News. This valuation would place OpenAI well ahead of its competitors, including Anthropic, recently valued at $18 billion, and Elon Musk’s xAI at $24 billion.
The rapid development of advanced generative AI has raised concerns among governments and technologists about its broader societal implications. OpenAI itself has faced internal criticism for prioritising commercial interests over its original mission to develop AI for the benefit of humanity. Last year, CEO Sam Altman was temporarily ousted by the board over concerns that the company was drifting away from its founding goals, an event internally referred to as “the blip.”
Furthermore, several safety executives, including Jan Leike, have left the company, citing a shift in focus from safety to commercialisation. Leike warned that “building smarter-than-human machines is an inherently dangerous endeavour,” and expressed concern that safety culture at OpenAI had been sidelined.
In response to these criticisms, OpenAI announced a new safety training approach for the o1 series, which it says draws on the model’s enhanced reasoning capabilities to improve adherence to safety and alignment guidelines. The company has also formalised agreements with AI safety institutes in the US and UK, granting them early access to research versions of the model to bolster collaborative efforts in safeguarding AI development.
As OpenAI pushes forward with its latest innovations, the company aims to balance the pursuit of technological advancement with a renewed commitment to safety and ethical considerations in AI deployment.