AI

linqi offers various AI functions. If you want to use them in an on-premise installation, you will need appropriate hardware or an API connection.

Overview

linqi processes AI tasks through a queue system. When a process triggers a new AI task (e.g., reading data or generating text), linqi adds a corresponding task to the queue.
In the background, one or more services process these tasks and return the results to linqi. These services can run on a server equipped with the hardware required for AI workloads.
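The queue mechanism described above can be sketched as follows. This is an illustrative model only, assuming a single in-process worker; the names (`AI_TASKS`, `handle_task`) are hypothetical and not part of linqi's actual API.

```python
import queue
import threading

# Hypothetical in-memory task queue; linqi's real queue is a server-side
# component, this only models the enqueue/process/return flow.
AI_TASKS = queue.Queue()
results = []

def handle_task(task):
    # A real service would run an AI model here; we just echo a result.
    return {"task": task["kind"], "status": "done"}

def worker():
    # Background service draining the queue until it sees a stop sentinel.
    while True:
        task = AI_TASKS.get()
        if task is None:
            AI_TASKS.task_done()
            break
        results.append(handle_task(task))
        AI_TASKS.task_done()

# A process triggers AI tasks by enqueueing them...
AI_TASKS.put({"kind": "read-data"})
AI_TASKS.put({"kind": "generate-text"})
AI_TASKS.put(None)  # stop sentinel

# ...and a background service processes them and returns the results.
t = threading.Thread(target=worker)
t.start()
t.join()
```

After the worker finishes, `results` holds one result per enqueued task, in order.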

Jobworker

Jobworkers execute AI tasks via OpenAI-compatible APIs. For on-premise setups, it is recommended to host an inference server such as llama-server, Ollama, or vLLM on the same machine.
Alternatively, you can connect directly to hosted services such as OpenAI. Note that in this case, per-request costs apply.
OCR is performed by a model that is executed directly by the Jobworker.
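To illustrate what "OpenAI-compatible" means in practice, the sketch below builds a chat-completions request as llama-server, Ollama, and vLLM all accept it on their `/v1/chat/completions` endpoint. The URL, model name, and prompt are placeholders, not linqi configuration; the actual network call is commented out so the sketch runs without a server.

```python
import json
import urllib.request

# Placeholder endpoint for a locally hosted OpenAI-compatible server.
BASE_URL = "http://localhost:8080/v1/chat/completions"

# Minimal chat-completions payload in the OpenAI request schema.
payload = {
    "model": "local-model",  # placeholder model name
    "messages": [
        {"role": "user", "content": "Extract the invoice date from this text."}
    ],
    "temperature": 0.0,
}

request = urllib.request.Request(
    BASE_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# Sending the request would look like this (requires a running server):
# with urllib.request.urlopen(request) as response:
#     answer = json.load(response)["choices"][0]["message"]["content"]
```

Because the request format is the same everywhere, switching between a local server and a hosted service like OpenAI is mostly a matter of changing the base URL and adding an API key header.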

Depending on the workload, you can use NPUs or GPUs as hardware.