🤖 AI and LLM APIs 🤖

Agregore enables P2P web apps to access user-configurable Large Language Model APIs that are modelled after OpenAI's text completion features.

Unlike other browsers, which wire a chat interface into the browser itself, we leave it up to web apps and extensions to do whatever they want while keeping control over the model in the user's hands.

As well, instead of onboarding you onto an expensive and environmentally destructive cloud-based LLM, we default to using a local Ollama install and a default 3B-parameter model that can run locally on most consumer hardware. These models are a bit less effective at complex tasks, but they take orders of magnitude less power, work fully offline (after initial setup), and keep all your conversations private.

Setting up Ollama

Before you can run local models, you will want to set up Ollama on your computer. In the future we may integrate it directly into Agregore; if you want this feature, please open an issue on our GitHub repository.

curl -fsSL https://ollama.com/install.sh | sh
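Note that this one-line install script targets Linux; on macOS or Windows you can grab an installer from the Ollama website instead.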

From there you can test that it's running by navigating to the models list at http://127.0.0.1:11434/v1/models.
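If you'd rather check from code, here's a minimal sketch that queries that same endpoint (this assumes Ollama is listening on its default port):

const response = await fetch('http://127.0.0.1:11434/v1/models')
// Lists the models Ollama has downloaded, in OpenAI's list format
console.log(await response.json())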

Note that the first time the LLM API is used, it will download the configured model if it is not already present. Agregore prompts the user before downloading and notifies them when the download is done.

API 📜

window.llm.isSupported

if(await window.llm?.isSupported()) {
    // Use APIs here
} else {
    alert("This website requires Agregore's LLM API")
}
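The optional chaining (window.llm?.) is what makes this safe in other browsers: there window.llm is undefined, so calling isSupported() on it directly would throw instead of falling through to the alert.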

window.llm.chat

let messages = [
    {role: 'system', content: 'You are a friendly AI assistant that likes to ramble about cats'},
    {role: 'user', content: 'What is your favorite thing?'}
]

const {role, content} = await window.llm.chat({
    // messages is mandatory
    messages,
    // the rest of these parameters are optional
    maxTokens: 1337,
    temperature: 0.9,
    stop: ["cat"]
})

// Now you can loop and keep a conversation history, as shown below
messages.push({role, content})
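From there, a follow-up turn is just another chat() call with the accumulated history. A minimal sketch (the optional parameters are omitted since they're optional, per the above):

// Ask a follow-up question that relies on the earlier context
messages.push({role: 'user', content: 'Why is that your favorite?'})

const reply = await window.llm.chat({messages})

// Push the new reply too so the history stays complete for the next turn
messages.push(reply)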

window.llm.complete

const text = await window.llm.complete('The capital of Canada is', {
    // these parameters are optional
    maxTokens: 1337,
    // remove this and use the default unless you know what you're doing
    temperature: 0.9,
    stop: [" "]
})
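Note that the stop: [" "] option here ends generation at the first space, so the completion should come back as roughly a single word (likely "Ottawa").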

Configuring ✏️

You can configure your settings in your .agregorerc file, which you can open with Help > Edit Configuration File.

{
  "llm": {
    "enabled": true,
    "baseURL": "http://127.0.0.1:11434/v1/",
    "apiKey": "ollama",
    "model": "phi3:3.8b-mini-4k-instruct-q4_0"
  }
}

If you don't want pages to access this feature at all, set llm.enabled to false and Agregore will automatically deny any requests.
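Web apps should be prepared for that. Here's a defensive sketch; it assumes a denied or disabled request rejects the returned promise, so the exact error you see may differ:

try {
    const text = await window.llm.complete('Hello from a P2P app: ')
    console.log(text)
} catch (err) {
    // Assumption: requests fail with a rejected promise when llm.enabled is false
    console.warn('LLM request denied or unavailable', err)
}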

Ollama Models

If you're curious to try out different models, check out the list available in the Ollama Library.
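Swapping models is just a config change. For example, something like this (llama3.2:3b is an illustrative tag; check the library for what's actually available and how large it is):

{
  "llm": {
    "enabled": true,
    "baseURL": "http://127.0.0.1:11434/v1/",
    "apiKey": "ollama",
    "model": "llama3.2:3b"
  }
}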

OpenAI

If your computer is very weak, or if you're set on using the fancier cloud models, you can make use of OpenAI and their available models.

First, replace the llm.apiKey config in your .agregorerc with an OpenAI API key, then replace the llm.baseURL field with https://api.openai.com/v1/. You will also want to choose a model to use, like gpt-4o-mini, from the list on their website.
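Put together, the OpenAI variant of the config looks something like this (the sk-... value is a placeholder for your real API key):

{
  "llm": {
    "enabled": true,
    "baseURL": "https://api.openai.com/v1/",
    "apiKey": "sk-...",
    "model": "gpt-4o-mini"
  }
}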