Microsoft highlights how it trains small language models

What you need to know

  • Microsoft recently published a new blog post highlighting its efforts to teach small language models how to reason.
  • It unveiled Orca 2, a small language model that demonstrates strong reasoning capabilities by mimicking the step-by-step reasoning traces of more capable LLMs such as ChatGPT and Bing Chat.
  • According to benchmarks, Orca 2 outperforms other LLMs when tested on complex reasoning tasks.
  • Microsoft intends to use LLMs to train smaller language models, expanding their capabilities.

There is no doubt that Microsoft has placed all its bets on generative AI, especially after investing several billion dollars in the technology and extending its partnership with OpenAI.

Speaking of OpenAI, the company recently underwent what could be called a leadership shake-up. OpenAI’s board of directors removed Sam Altman from his position as CEO, citing a lack of confidence in his leadership. Shortly afterward, Microsoft offered Altman a role leading its advanced AI team, alongside Greg Brockman (OpenAI co-founder, who resigned shortly after Altman’s ouster).
