The application consumes models from an Ollama inference server. You can either run Ollama locally on your machine or rely on the Arconia Dev Services to spin up an Ollama container automatically when the application starts.
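
If you prefer to run Ollama locally, a minimal setup looks like the following. The model name is only an example; substitute whichever model the application is configured to use.

```shell
# Start the Ollama server (listens on localhost:11434 by default)
ollama serve

# Pull an example model (replace with the model the application expects)
ollama pull llama3.2
```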