SouthEast Linux Fest 2024

Getting Started with Ollama
2024-06-08, 15:00–15:50 (EDT), AlmaLinux (Ballroom C)

Large language models have gained significant attention in recent years for their ability to generate human-like text, drive innovative applications, and power advanced AI systems.
However, accessing these models through cloud services can be expensive and poses challenges around data privacy and latency. Ollama, an open-source engine, addresses these
issues by helping users get up and running with large language models locally. In this presentation, we will explore the basics of Ollama, its installation process, and best
practices for using it.
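
As a taste of what "running locally" looks like, the sketch below queries an Ollama server over its HTTP API. It is a minimal example, assuming Ollama is already installed, listening on its default port (11434), and that a model such as llama3 has been pulled beforehand (e.g. with `ollama pull llama3`).

    # Minimal sketch: ask a question of a locally running Ollama server.
    # Assumes Ollama is serving on the default port 11434 and the model
    # "llama3" has already been pulled.
    import json
    import urllib.request

    def ask_ollama(prompt: str, model: str = "llama3") -> str:
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,  # one complete JSON reply instead of a token stream
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    if __name__ == "__main__":
        print(ask_ollama("In one sentence, what is a large language model?"))

Because everything stays on localhost, no prompt text or model output leaves the machine, which is the data-privacy point the talk highlights.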