What you need to know
- Microsoft recently published a new blog post highlighting its efforts to teach small language models how to reason.
- It unveiled Orca 2, a small language model demonstrating strong reasoning capabilities by mimicking the step-by-step reasoning traces of more capable LLMs like ChatGPT and Bing Chat.
- According to benchmarks, Orca 2 outperforms other LLMs, including larger ones, when tested on complex tasks.
- Microsoft intends to use LLMs to train smaller language models, expanding their capabilities.
There is no doubt that Microsoft has placed all its bets on generative AI, especially after investing several billion dollars in this technology, extending its partnership with OpenAI.
Speaking of OpenAI, we have witnessed what could be called a paradigm shift in the tech company’s senior management. OpenAI’s board of directors removed Sam Altman from his position, citing a lack of confidence in his leadership. Shortly after, Altman was offered a position at Microsoft leading its Advanced AI team, alongside Greg Brockman (OpenAI co-founder, who resigned shortly after Altman’s ouster).
While all this is happening, Microsoft has released a new blog post highlighting its efforts to teach small language models how to reason. A few months ago, the company debuted Orca, “a 13 billion language model that demonstrated strong reasoning abilities by mimicking the step-by-step reasoning traces of higher-performing LLMs.”
And now it has unveiled Orca 2 (which comes in two sizes – 7 billion and 13 billion parameters) as part of its efforts to harness the capabilities of smaller LMs. According to Microsoft, Orca 2 offers “enhanced cues and training methods that can enable smaller language models to acquire enhanced reasoning abilities.” This is a significant feat, given that such capabilities are usually found only in much larger language models, including those powering ChatGPT and Bing Chat.
Certainly, both chatbots have faced numerous setbacks throughout this year, with several users claiming that ChatGPT is getting dumber amid reports that OpenAI could be on the verge of bankruptcy. Meanwhile, a report states that Bing’s user base has stagnated for three consecutive months, despite Microsoft’s massive investment in the technology.
Microsoft further states that Orca 2 is a step ahead of other similar models, including the original Orca model. Additionally, the company says it delivers advanced levels of performance compared to larger models when handling complex tasks that test “advanced reasoning abilities in zero-shot settings.”
Microsoft’s Advanced AI team is on the right track
Amid the OpenAI fiasco this weekend, Microsoft CEO Satya Nadella announced that Sam Altman would join the company as head of its Advanced AI team, a role well suited to his abilities and skills.
Following news of Altman’s ouster, more than 500 OpenAI staff members wrote a letter to the board requesting his reinstatement, saying the decision undermined their vision. The employees indicated that they would leave the company if their demands were not met, further stating that “there is no OpenAI without its people.”
According to sources familiar with the matter, Microsoft is ready to absorb all OpenAI employees into its AI division if they follow through on their threat and leave the company.
Microsoft is likely to leverage the OpenAI team to do more with Orca 2. This would allow the company to use LLMs to train smaller language models, expanding the capabilities of those smaller models.
Do you think Microsoft is on the right track with its Orca 2 plans? Share your thoughts with us in the comments.