OpenAI took too long to respond... why don't we wait a bit more?

Couldn’t agree more.

At my company we have moved entirely to PaLM 2 and its various models over the past few weeks. We have essentially jettisoned OpenAI for any future development and we’re working toward full adoption of Google’s models or open-source models going forward.

I have successfully integrated embeddings, chat completions, and text completions using Gecko (the tiny model) and Bison (the moderately large LLM) with really good results. It's still in beta, so there are some rough edges, but nothing we can't work around pretty well. Using these inferencing APIs in Packs is very easy.
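For anyone curious what those calls look like, here's a rough sketch of the request payloads the PaLM 2 beta REST API expects. The model names (`text-bison-001`, `embedding-gecko-001`) and endpoint shape come from Google's public beta docs; the helper function names are just mine for illustration, not anything from a Pack.

```python
# Sketch of the payloads for the PaLM 2 beta API (generativelanguage).
# Model names are the public beta identifiers; the build_* helpers
# are illustrative, not part of any SDK.

BASE = "https://generativelanguage.googleapis.com/v1beta2"

def build_text_request(prompt: str, temperature: float = 0.2) -> dict:
    """JSON body for models/text-bison-001:generateText."""
    return {
        "prompt": {"text": prompt},
        "temperature": temperature,
        "candidateCount": 1,
    }

def build_embedding_request(text: str) -> dict:
    """JSON body for models/embedding-gecko-001:embedText."""
    return {"text": text}

def endpoint(model: str, method: str, api_key: str) -> str:
    """Full URL for a model method, e.g. generateText or embedText."""
    return f"{BASE}/models/{model}:{method}?key={api_key}"
```

POSTing `build_text_request(...)` as JSON to `endpoint("text-bison-001", "generateText", key)` returns a list of candidates; a Pack can wrap that in a single fetcher call.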

Performance across the board is 3x that of OpenAI, and it's consistent. We don't see the periods of sluggishness we did with OpenAI's models.

The cost? There isn't any right now, so we're taking advantage of that and running more inference, which lets us do things that were financially impractical with OpenAI. Elon was right: AI costs will soon be less than the cost of accounting for them. It wants to be free, and that seems to have happened a lot sooner than anyone imagined.

Is PaLM 2 perfect? Hardly. But no LLM is. In my view, we need AGI to be competitive, productive, and a leader in our business segment. We also realize that we need to stay flexible ("platform-agnostic") about our LLM and API implementation choices.

I would gladly build an AI Pack, but there’s no incentive to do so. Coda will likely produce one. Competing with the vendor in an aftermarket is risky. :wink: