Coda AI Chat Internet and/or Plugin Access

Coda AI Chat is a terrific advance - thank you!

One huge suggestion: it would be immensely useful if an AI Chat query could retrieve information from the web, e.g. “List 5 top news stories from today” or “What are reviews of XYZ Hotel?” The ability to then insert that data into a Coda doc would be stunning.

I realize that the basic OpenAI API does not allow Internet access. But there are plugins for ChatGPT that add Internet capability. Is there any way similar capability could be added to Coda AI Chat?

hi @Richard_Kaplan ,

You can already do this; see the screenshot. The option is not promoted in the top bar, and I don’t know why.

Thank you @Christiaan_Huizer

Even when I set the context to General, it only gives responses from ChatGPT’s static training data, not from the web. Why do you seem to get a different result?

hi @Richard_Kaplan ,

I don’t know what is running in the background; several LLMs, as I understood it, and they should help you out.

I asked in Dutch why houses have sloping roofs and then had the answer translated into English. That works very well.

The AI is not real-time like Bing or Bard, so you cannot ask for recent news. Try something else and I am sure it will help you out.

I apply this logic (see the example after the list):

  • Role & Goal
  • Ask a task
  • Format
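
For example, a prompt following that structure might look like this (an illustrative prompt, not a quote from my blog):

“You are an architectural historian whose goal is to explain building practices. Explain why houses have sloping roofs. Answer in three short bullet points.”

The first sentence sets the role and goal, the second states the task, and the third fixes the format.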

See also my blog on the matter:

Thank you @Christiaan_Huizer

Asking about sloping roofs and translating between languages are both capabilities of the trained ChatGPT model.

It is not accessing the Internet at present.

The question is: is there a way to access ChatGPT plugins through Coda AI? If that were possible, we could use the Internet plugin (among many others). It would add immense capability.


hi @Richard_Kaplan ,

It is not accessing the Internet at present.

That is correct, the AI does not do that.

I am not aware of the options you suggest; maybe @Katy_Turner can shed some light on the matter.

Cheers, Christiaan

No.

Correct. This is why the systems I build (in Coda) do not use Coda AI; they use PaLM 2 (Bard) directly, which by default includes live Internet access without additional plugins or cost. This is also possible with other open-source LLMs.


That sounds quite interesting and promising. How do I do that? Can you point me to a post or other reference on how I can set that up?


Can’t wait to hear more about this…

These are the sixteen places where I’ve mentioned use of PaLM 2 with Coda.

I create Packs that integrate Coda with PaLM 2. Ideally, Coda will eventually allow us to simply specify alternate LLMs, but for now we must use Packs or embedded plugins to bring the advantages of other LLMs into Coda.

The AI requirements determine whether I build an embedded plugin or a Pack to meet the inferencing needs.

One of the challenges I must overcome when I do this is conveying table data and content to PaLM 2. Coda AI makes this seamless with OpenAI LLMs, a clear advantage that you don’t have to think about. However, PaLM 2 has other advantages, such as its multi-modal features: you can get images in your results, and you can also use images in your prompts. The prompt window, however, is on par with GPT-4 (just 8k tokens).

This pathway is not a bed of roses; it is rife with complicated requirements and custom development. But it is also 2.7 times faster than GPT models and more reliable.
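
As a rough sketch of what that looks like from the doc side (the Inference formula and the Notes column here are hypothetical, for illustration only), you might pass row content into a custom Pack formula like this:

Inference(Concatenate("Summarize the following notes: ", thisRow.Notes))

Everything beyond that, chunking large tables to fit the 8k-token window, for instance, is custom work.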

Thank you @Bill_French

Might you be able to share an actual document and Pack where you have implemented this? [If you choose to make it a paid Pack that would be reasonable given the effort you have put into this.]

I suspect I could take your example and tweak it to my needs as I have written some Packs already for custom projects of mine. But I suspect the work to duplicate what you have done would go beyond what I could take on.

Yep. Here’s a simple formula (inference) that calls the PaLM 2 API.

/*

   ***********************************************************
   PALM2 EXAMPLE
   Copyright (c) 2023 by Global Technologies Corporation
   ALL RIGHTS RESERVED ...
   ***********************************************************

*/

// import the packs sdk
import * as coda from "@codahq/packs-sdk";

// create the pack
export const pack = coda.newPack();

pack.setUserAuthentication({
  // type: coda.AuthenticationType.HeaderBearerToken,
  type: coda.AuthenticationType.QueryParamToken,
  paramName: "key",
  instructionsUrl: 'https://makersuite.google.com/app/apikey',
});

// set the network domain
pack.addNetworkDomain('generativelanguage.googleapis.com');
const cPaLMEndpoint = "https://generativelanguage.googleapis.com/v1beta2/models/";

// ################################################################
//
// inference
//
// ################################################################
pack.addFormula({

  resultType : coda.ValueType.String,
  // codaType: coda.ValueHintType.Markdown,

  name: "inference",
  description: "Text completion...",
  cacheTtlSecs: 0,

  parameters: [

    coda.makeParameter({
      type: coda.ParameterType.String,
      name: "xPrompt",
      description: "The text to verify for factuality.",
    }),

    coda.makeParameter({
      type: coda.ParameterType.String,
      name: "xLearnerShot",
      description: "An optional learner shot.",
      optional: true
    }),

  ],

  // execute the formula
  execute: async function ([xPrompt, xLearnerShot], context) {

    console.log("xPrompt: " + xPrompt);

    // include optional few-shot examples when provided
    var examples = (xLearnerShot) ? "Examples:\n" + xLearnerShot : "";

    var prompt = `
      ${xPrompt}

      ${examples}
      
      [output]
    `;

    // cap the completion length (adjust as needed)
    var tokens = 32;

    // call the PaLM 2 API and await the parsed response
    var oResponse = await palmCreateTextCompletion(prompt, "", "text-bison-001", 1.0, tokens, 3, context);

    var response = "";

    if (oResponse.body.candidates) {

      try {
        // return the text of the first candidate
        response = oResponse.body.candidates[0].output;
        console.log(JSON.stringify(oResponse.body.candidates));
      } catch (e) {
        console.log("Failure!");
        console.log(e.message);
        response = JSON.stringify(oResponse);
      }
    }

    return response;

  }

});

//
// TEXT COMPLETION
//
async function palmCreateTextCompletion(textPrompt, textContext, textModel, textTemp, textTokens, candidateCount, context)
{

  var url = cPaLMEndpoint + textModel + ":generateText";

  console.log('url: ' + url);

  var body = {
    "prompt": {
      "text" : textPrompt
    },
    "temperature": textTemp,
    "candidate_count": candidateCount,
    "max_output_tokens" : textTokens,
    "top_k" : 4,
    'safety_settings' : [
      {
        "category": "HARM_CATEGORY_DEROGATORY",
        "threshold": "BLOCK_NONE"
      },
      {
        "category": "HARM_CATEGORY_TOXICITY",
        "threshold": "BLOCK_NONE"
      },
      {
        "category": "HARM_CATEGORY_VIOLENCE",
        "threshold": "BLOCK_NONE"
      },
      {
        "category": "HARM_CATEGORY_SEXUAL",
        "threshold": "BLOCK_NONE"
      },
      {
        "category": "HARM_CATEGORY_MEDICAL",
        "threshold": "BLOCK_NONE"
      },
      {
        "category": "HARM_CATEGORY_DANGEROUS",
        "threshold": "BLOCK_NONE"
      }
    ]
  }
  console.log(JSON.stringify(body));

  const response = await context.fetcher.fetch( {
    url     : url,
    method  : 'POST',
    headers: {
      'Content-Type' : 'application/json'
    },
    'body' : JSON.stringify(body)
  });

  return response;
}
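
Once the Pack is installed, you call it from a doc like any other formula. For example (the prompts here are illustrative):

Inference("Why do houses have sloping roofs?")
Inference("Translate to Dutch: good morning", "English: thank you => Dutch: dank je")

The second argument is the optional learner shot, which the formula appends to the prompt as examples.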


Outstanding - huge thanks @Bill_French - I will give this a try


Newly relevant…