Amazon “is investing millions in training an ambitious large language model,” reports Reuters, “hoping it could rival top models from OpenAI and Alphabet, two people familiar with the matter told Reuters.”
The model, codenamed “Olympus”, has 2 trillion parameters, the people said, which could make it one of the largest models being trained. OpenAI’s GPT-4 model, one of the best models available, is reported to have one trillion parameters…
The team is spearheaded by Rohit Prasad, former head of Alexa, who now reports directly to CEO Andy Jassy… Amazon believes having homegrown models could make its offerings more attractive on AWS, where enterprise clients want to access top-performing models, the people familiar with the matter said, adding there is no specific timeline for releasing the new model.
“While the parameter count doesn’t automatically mean Olympus will outperform GPT-4, it’s probably a good bet that it will, at minimum, be very competitive with its rival from OpenAI,” argues a financial writer at the Motley Fool, as well as with Google’s nascent AI projects.
Amazon could have a key advantage over its competition, one that CEO Andy Jassy alluded to in the company’s third-quarter earnings call. Jassy said, “Customers want to bring the models to their data, not the other way around. And much of that data resides in AWS [Amazon Web Services] as the clear market segment leader in cloud infrastructure….”
Amazon will likely also leverage Olympus in other ways. For example, the company could make its CodeWhisperer generative AI coding companion more powerful. Jassy noted in the Q3 call that all of Amazon’s “significant businesses are working on generative AI applications to transform their customer experiences.” Olympus could make those initiatives even more transformative.
The writer also points out that Amazon’s profits in the third quarter of 2023 more than tripled from the same quarter in 2022.
And Amazon’s stock price has already jumped more than 40% in 2023.