Spring Boot Integration with Ollama AI

To integrate Spring Boot with Ollama (a tool for running large language models locally), you can create a Spring Boot application that uses Spring AI's Ollama support to talk to those models. Below is a simple end-to-end example of how you might integrate Spring Boot with Ollama.


1. Set Up the Spring Boot Application

Start by setting up a basic Spring Boot application. If you don't have one already, follow these steps:

  • Go to Spring Initializr.
  • Choose:
    • Project: Maven or Gradle (depending on your preference). I prefer Maven.
    • Language: Java
    • Spring Boot Version: Choose the latest stable version (e.g., 3.x)
    • Dependencies: Spring Web, Ollama 

Generate and unzip the project, then open it in your IDE.

2. Complete pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>3.4.1</version>
		<relativePath/> <!-- lookup parent from repository -->
	</parent>
	<groupId>com.example</groupId>
	<artifactId>demo</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<name>demo</name>
	<description>Demo project for Spring Boot</description>
	<properties>
		<java.version>21</java.version>
		<spring-ai.version>1.0.0-M4</spring-ai.version>
	</properties>
	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.ai</groupId>
			<artifactId>spring-ai-ollama-spring-boot-starter</artifactId>
		</dependency>

		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-test</artifactId>
			<scope>test</scope>
		</dependency>
	</dependencies>
	<dependencyManagement>
		<dependencies>
			<dependency>
				<groupId>org.springframework.ai</groupId>
				<artifactId>spring-ai-bom</artifactId>
				<version>${spring-ai.version}</version>
				<type>pom</type>
				<scope>import</scope>
			</dependency>
		</dependencies>
	</dependencyManagement>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>
	<repositories>
		<repository>
			<id>spring-milestones</id>
			<name>Spring Milestones</name>
			<url>https://repo.spring.io/milestone</url>
			<snapshots>
				<enabled>false</enabled>
			</snapshots>
		</repository>
	</repositories>

</project>
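If you chose Gradle instead of Maven, the equivalent build.gradle setup looks roughly like this (a sketch assuming the io.spring.dependency-management plugin is applied; the coordinates match the Maven POM above):

repositories {
    mavenCentral()
    maven { url 'https://repo.spring.io/milestone' }  // Spring AI milestone builds
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.springframework.ai:spring-ai-ollama-spring-boot-starter'
    testImplementation 'org.springframework.boot:spring-boot-starter-test'
}

dependencyManagement {
    imports {
        mavenBom "org.springframework.ai:spring-ai-bom:1.0.0-M4"
    }
}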

3. Configure the Ollama Connection (Optional)

Ollama runs locally and does not require an API key. By default, the Spring AI starter connects to http://localhost:11434, so configuration is only needed if your Ollama instance runs elsewhere or you want to pick a specific model. Use the spring.ai.ollama.* properties in application.properties or application.yml.

For example, in application.properties:

spring.ai.ollama.base-url=http://localhost:11434
spring.ai.ollama.chat.options.model=llama3

Alternatively, if you're using application.yml, configure it like this:

spring:
  ai:
    ollama:
      base-url: http://localhost:11434
      chat:
        options:
          model: llama3

Before starting the application, make sure Ollama is installed and running, and that the configured model has been downloaded (for example, with ollama pull llama3).

4. Auto-Configuration Provided by the Starter

The spring-ai-ollama-spring-boot-starter auto-configures everything needed to talk to Ollama. In particular, it registers an OllamaChatModel (Spring AI's ChatModel implementation for Ollama) and a ChatClient.Builder bean that you can inject into your own components. See the Spring AI reference documentation for the full list of auto-configured beans and options.

5. Create a Service to Interact with Ollama

Create a service class that uses the auto-configured ChatClient.Builder to build a ChatClient and send prompts to Ollama. This service handles all communication with the Ollama model.

AiService.java:

import org.springframework.ai.chat.client.ChatClient;
import org.springframework.stereotype.Service;

@Service
public class AiService {

    private final ChatClient chatClient;

    public AiService(ChatClient.Builder chatClientBuilder) {
        this.chatClient = chatClientBuilder.build();
    }

    public String getPrediction(String input) {
        return chatClient.prompt()
                .user(input)
                .call()
                .content();
    }
}

The ChatClient fluent API sends the user input as a prompt to the model configured in application.properties and returns the model's response as a String. With a single constructor, Spring injects the builder automatically; no @Autowired annotation is needed.
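For context, the starter ultimately calls Ollama's native REST API. The framework-free sketch below (plain JDK HttpClient, no Spring) shows the equivalent raw request to POST /api/generate; the model name llama3 is illustrative, and 11434 is Ollama's default port:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Framework-free sketch of the raw call the starter makes for us:
// Ollama's native REST API accepts a JSON body at POST /api/generate.
public class OllamaRawClient {

    // Pure helper: builds the JSON request body. Naive quote escaping is
    // enough for a demo; use a real JSON library in production code.
    static String buildBody(String model, String prompt) {
        return "{\"model\":\"" + model.replace("\"", "\\\"")
                + "\",\"prompt\":\"" + prompt.replace("\"", "\\\"")
                + "\",\"stream\":false}";
    }

    // Sends the prompt to a locally running Ollama server and returns the
    // raw JSON response (requires `ollama serve` listening on port 11434).
    static String generate(String model, String prompt) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:11434/api/generate"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(buildBody(model, prompt)))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}
```

Seeing the raw request makes it clear what the starter's configuration properties map to: the base URL becomes the request host, and the model option becomes the "model" field in the JSON body.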

6. Create a Controller to Handle Requests

Next, create a controller to expose an API endpoint that will allow external clients to interact with your AI service.

AiController.java:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/ai")
public class AiController {

    private final AiService aiService;

    @Autowired
    public AiController(AiService aiService) {
        this.aiService = aiService;
    }

    @PostMapping("/predict")
    public String predict(@RequestBody String input) {
        return aiService.getPrediction(input);
    }
}

This controller exposes a POST endpoint /ai/predict where you can send input data (e.g., text or other formats) for AI processing, and it returns the result of the prediction.

7. Run the Application

Now that everything is set up, you can run your Spring Boot application.

For Maven:

mvn spring-boot:run

8. Test the API

Once your application is running, you can test the AI prediction functionality. You can use tools like Postman or curl to send requests to your API.

For example, using curl:

curl -X POST http://localhost:8080/ai/predict -H "Content-Type: application/json" -d "\"Your input data here\""

This will send the input data to your /ai/predict endpoint, which will pass it to the AiService for processing via Ollama's API, and return the result.

9. Error Handling and Validation

You may also want to handle errors gracefully by adding appropriate exception handling and validation for the input.

Example for basic validation:

@PostMapping("/predict")
public ResponseEntity<String> predict(@RequestBody String input) {
    if (input == null || input.trim().isEmpty()) {
        return ResponseEntity.badRequest().body("Input cannot be empty.");
    }
    String prediction = aiService.getPrediction(input);
    return ResponseEntity.ok(prediction);
}
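The null/blank check above can also be pulled into a tiny framework-free helper, which keeps the controller lean and is trivial to unit-test (the class and method names here are my own, not from the starter):

```java
// Small reusable guard for prompt input; plain Java, no Spring dependency.
public final class InputValidator {

    private InputValidator() {
        // utility class, not instantiable
    }

    // A prompt is usable when it is non-null and contains at least
    // one non-whitespace character.
    public static boolean isValidPrompt(String input) {
        return input != null && !input.trim().isEmpty();
    }
}
```

The controller's predict method can then start with if (!InputValidator.isValidPrompt(input)) and return the 400 response, keeping the validation rule in one place.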

10. Additional Features and Customization

Depending on the features provided by spring-ai-ollama-spring-boot-starter, you can customize the service further. For example, you can configure timeouts, batch processing, or integrate with other Spring services like caching.
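For example, common chat options can be tuned declaratively via properties (names per the Spring AI Ollama documentation; double-check them against your milestone version):

# Sampling temperature: lower values make answers more deterministic
spring.ai.ollama.chat.options.temperature=0.4
# Upper bound on the number of tokens the model may generate
spring.ai.ollama.chat.options.num-predict=256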

