Enterprise Java And Quarkus
The Secret Weapon for Java Developers Building with AI
Discover how prompt engineering can dramatically improve your interactions with LLMs like local Granite models.
Artificial intelligence is changing how we develop applications, and prompt engineering has become one of the most important skills for developers to understand. As Java developers integrate Large Language Models (LLMs) into their applications, the ability to create effective prompts is becoming a powerful tool in our toolbox. This article explains the core principles of prompt engineering and offers practical techniques for Java developers to improve their interactions with LLMs, particularly when integrating with local Granite models using LangChain4j.
Understanding Prompt Engineering
Prompt engineering is the process of designing inputs, known as prompts, for AI models to achieve the desired output. Well-designed prompts guide the model's behavior, leading to more relevant and accurate responses. For Java developers, proficiency in prompt engineering can significantly enhance the performance of applications that incorporate AI, enabling the creation of sophisticated solutions such as intelligent assistants or automated workflows.
Key Techniques in Prompt Engineering
Let's examine these techniques with practical Java examples, as if we were developing a "Smart Issue Reporter" that categorizes user-reported problems using a locally served Granite model.
1. Clear and Concise Instructions
Be Specific: Clearly defining the desired action for the model is crucial. Ambiguous prompts can result in responses that are off-topic or inaccurate. For our issue reporter, the instruction to categorize needs to be unambiguous.
String prompt = """
You are an issue categorization system. Your task is to classify the following user-reported problem based on its description.
Provide the category as a single word: "Bug", "Feature Request", "Documentation", or "Other".
User Problem: '%s'
""";
String userProblem = "The 'Save' button on the user profile page does not seem to store the changes I make.";
String formattedPrompt = String.format(prompt, userProblem);
// When interacting with the LLM using LangChain4j, we would use 'formattedPrompt'
In this example, the prompt clearly states the model's role (issue categorization system), the task (classification), the expected output format (single word), and lists the possible categories.
Use Examples: Supplying examples within the prompt can greatly improve the model's understanding of the task and the expected format of the output.
String promptWithExamples = """
You are an issue categorization system. Your task is to classify the following user-reported problem based on its description.
Provide the category as a single word: "Bug", "Feature Request", "Documentation", or "Other".
Here are some examples:
User Problem: The application stops working when I try to open a large CSV file.
Category: Bug
User Problem: I would like the application to have a dark theme.
Category: Feature Request
User Problem: I do not understand how to set up the email notifications.
Category: Documentation
User Problem: '%s'
Category:
""";
String userProblem = "I am getting an error message saying 'NullPointerException' when I click the 'Submit' button.";
String formattedPromptWithExamples = String.format(promptWithExamples, userProblem);
// This prompt with examples helps the Granite model classify correctly.
By including these examples, we illustrate the relationship between the input (user problem) and the desired output (category), making it easier for the Granite model to accurately classify new, similar problems.
2. Contextual Prompting
Provide Context: Including relevant background information can lead to more accurate responses from the model. For our issue reporter, knowing the specific area of the application the problem pertains to can be very helpful.
String promptWithContext = """
You are an issue categorization system for the 'User Management' section of our application. Your task is to classify the following user-reported problem based on its description.
Provide the category as a single word: "Bug", "Feature Request", "Documentation", or "Other".
User Problem: '%s'
""";
String userProblem = "I am unable to reset my password using the 'Forgot Password' link.";
String formattedPromptWithContext = String.format(promptWithContext, userProblem);
// The added context about the 'User Management' section helps the model understand the problem better.
Providing details such as the specific application section allows the Granite model to focus its understanding and potentially offer a more precise classification.
Maintain Context: Although each prompt is stateless on its own, practical applications often involve multi-turn user interactions. Storing previous exchanges or using LangChain4j's chat memory features helps preserve context across these turns, allowing for more nuanced problem reporting and classification as the user provides more details.
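As a minimal sketch of such multi-turn handling with LangChain4j's chat memory, assuming a recent LangChain4j version with the langchain4j-ollama dependency and a locally running Ollama instance (exact builder method names vary between releases); the Assistant interface is an illustrative example, not part of the library:
import dev.langchain4j.memory.chat.MessageWindowChatMemory;
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.service.AiServices;

public class IssueReporterAssistant {

    // Illustrative assistant interface; LangChain4j generates the implementation at runtime.
    interface Assistant {
        String chat(String userMessage);
    }

    public static void main(String[] args) {
        OllamaChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434") // assumed local Ollama endpoint
                .modelName("granite-code")
                .build();

        Assistant assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(model)
                // Keep the last 10 messages so follow-up messages retain earlier context
                .chatMemory(MessageWindowChatMemory.withMaxMessages(10))
                .build();

        System.out.println(assistant.chat("The 'Save' button on the profile page does nothing."));
        // The follow-up can rely on the earlier exchange being remembered.
        System.out.println(assistant.chat("It only happens after I upload an avatar image."));
    }
}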
3. Zero-Shot and Few-Shot Learning
Zero-Shot Learning: The initial prompt examples, where we directly ask the model to classify problems without giving specific examples of problem classification, demonstrate zero-shot learning. The model uses its existing knowledge to understand and perform the task.
Few-Shot Learning: The promptWithExamples prompt clearly shows few-shot learning. By providing a small set of labeled examples of user problems and their corresponding categories within the prompt, we guide the Granite model to better grasp the task and improve its accuracy when dealing with new, similar problems.
4. Chain-of-Thought Prompting
Break Down Complex Tasks: For more complex situations, we might want the model to first reason about the problem before giving the final classification. This can be achieved by dividing the task into smaller steps within the prompt.
String promptWithReasoning = """
You are an issue categorization system. Your task is to first identify the component of the application most likely involved in the following user-reported problem, and then classify the problem.
Provide your response in the format: "Component: [Component Name], Category: [Category]".
User Problem: '%s'
""";
String userProblem = "Following the latest update, I can no longer log in to my account.";
String formattedPromptWithReasoning = String.format(promptWithReasoning, userProblem);
// This prompt encourages the model to think step by step, potentially leading to a more accurate classification.
By prompting the model to first pinpoint the affected component, we encourage a more structured thought process that can lead to a more accurate final classification.
Encourage Reasoning: We can explicitly ask the model to explain the reasoning behind its classification.
String promptWithExplanation = """
You are an issue categorization system. Your task is to classify the following user-reported problem and briefly explain your reasoning.
Provide the category as a single word ("Bug", "Feature Request", "Documentation", or "Other") followed by your reasoning in one sentence.
User Problem: '%s'
""";
String userProblem = "The application does not have the ability to export data to a CSV format.";
String formattedPromptWithExplanation = String.format(promptWithExplanation, userProblem);
// Requesting an explanation helps understand the model's thought process and can be useful for refining prompts.
Asking for an explanation not only provides the category but also offers insight into the model's decision-making process. This can be valuable for understanding the model's behavior and identifying areas where the prompts could be improved.
Enhanced Prompting and Result Formatting
Beyond the foundational techniques, several enhanced prompting approaches can further optimize interactions with Large Language Models, particularly the larger, more capable foundation models. Additionally, understanding how to guide the model to format its output appropriately is crucial for seamless integration into Java applications.
Enhanced Prompting Approaches
Role-Playing and Persona Prompting: Larger LLMs can effectively adopt specific roles or personas when instructed. By telling the model to act as a particular expert or character, you can influence the style and content of its responses. This can be useful for eliciting more tailored advice or creative content.
String prompt = """
You are acting as a highly experienced Senior Java Architect.
Given the following scenario: we need to build a RESTful API for an e-commerce platform. What are the key design considerations for scalability and security?
""";
Coaxing and Refinement Prompting: Achieving the desired output might involve an iterative process. Starting with a general prompt and then refining it based on the model's initial response can lead to more precise results. This could involve adding constraints, requesting more detail, or asking for alternative perspectives.
// Initial prompt
String prompt1 = "Explain the concept of microservices.";
// ... LLM response ...
// Refinement prompt
String prompt2 = "Provide a Java-based example of inter-service communication using REST.";
Knowledge Retrieval Augmentation (RAG) Prompting: This powerful technique involves providing the LLM with external, relevant information within the prompt or just before generating a response. This allows the model to answer questions or perform tasks based on up-to-date or domain-specific knowledge it might not have been trained on. LangChain4j offers tools to facilitate RAG in Java applications.
String context = "According to the company's internal documentation, the new authentication service uses OAuth 2.0 with JWT tokens.";
String promptWithContext = """
Based on the following information, what authentication protocol is used by the new authentication service?
%s
""".formatted(context);
Prompt Chaining: Complex tasks can be broken down into a sequence of prompts, where the output of one prompt serves as the input for the next. This allows for multi-step reasoning and task completion. LangChain4j provides mechanisms to create and manage these prompt chains in Java.
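As a minimal sketch of manual chaining, assuming a LanguageModel instance named llm (such as the OllamaLanguageModel shown in the integration example later in this article) and a userProblem string, the component identified by the first call feeds the classification prompt of the second call:
// Step 1: ask the model to identify the affected component.
String step1Prompt = """
        Identify the application component most likely involved in the following user problem.
        Answer with the component name only.
        User Problem: '%s'
        """.formatted(userProblem);
String component = llm.generate(step1Prompt).content();

// Step 2: the output of the first call becomes input for the classification prompt.
String step2Prompt = """
        The problem below affects the '%s' component.
        Classify it as "Bug", "Feature Request", "Documentation", or "Other".
        User Problem: '%s'
        """.formatted(component, userProblem);
String category = llm.generate(step2Prompt).content();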
Conditional Prompting: You can design prompts that lead the model to behave differently based on specific conditions outlined in the prompt.
String prompt = """
You are a code reviewer. Examine the following Java code. If there are any potential performance issues, point them out. Otherwise, state "No performance issues found."
Code:
%s
""";
Result Formatting
Guiding the LLM to format its output in a structured way is essential for programmatic use in Java applications. Larger models often have improved capabilities in this area.
JSON Output: Instructing the model to return its response as a JSON object is highly useful for data serialization and parsing in Java. You can often specify the desired schema.
String prompt = """
Extract the name and version of the following Java dependency and return the result as a JSON object with keys "name" and "version":
Dependency: com.google.guava:guava:31.1-jre
""";
Libraries like LangChain4j can assist in defining output schemas and automatically mapping the LLM's JSON response to Java objects.
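As a minimal sketch of that kind of mapping, assuming a recent LangChain4j version and a local chat model served via Ollama; the Dependency record and Extractor interface are illustrative names, not part of the library:
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.UserMessage;

public class DependencyExtractor {

    // Illustrative target type for the structured response.
    record Dependency(String name, String version) {}

    interface Extractor {
        @UserMessage("Extract the name and version of this Java dependency: {{it}}")
        Dependency extract(String dependencyCoordinates);
    }

    public static void main(String[] args) {
        OllamaChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("granite-code")
                .build();

        Extractor extractor = AiServices.create(Extractor.class, model);
        // LangChain4j maps the model's JSON response onto the record fields.
        Dependency dependency = extractor.extract("com.google.guava:guava:31.1-jre");
        System.out.println(dependency.name() + " " + dependency.version());
    }
}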
XML Output: In scenarios where XML is the preferred format, you can prompt the model accordingly.
String prompt = """
Extract the title and author from the following book information and return the result as an XML document with <book> element containing <title> and <author> tags:
Title: The Hitchhiker's Guide to the Galaxy, Author: Douglas Adams
""";
Code Blocks with Syntax Highlighting: When generating code snippets, you can instruct the model to enclose the code in Markdown code blocks with the appropriate language identifier for syntax highlighting.
String prompt = """
Write a simple "Hello, World!" program in Java and enclose it in a Markdown code block.
""";
Delimiter-Based Formatting: For simpler structured data, you can ask the model to separate items using specific delimiters like commas or semicolons.
String prompt = """
List the steps to compile and run a Java program using the command line, separated by semicolons.
""";
Note on Model Capabilities:
The effectiveness of these enhanced prompting techniques and the ability to generate specific output formats can vary depending on the underlying LLM. Larger and more advanced models generally exhibit better performance with complex instructions and structured output requests compared to smaller, locally run models. When choosing a model for your Java application, consider the complexity of the tasks and the desired level of control over the output format.
Practical Applications in Java
Integrating with LangChain4j for Issue Reporting
LangChain4j simplifies the interaction with LLMs like our local Granite model. We can use the prompt engineering techniques discussed to build our Smart Issue Reporter. For example, we could use the promptWithExamples prompt to get an initial classification.
import dev.langchain4j.model.language.LanguageModel;
import dev.langchain4j.model.ollama.OllamaLanguageModel;

public class IssueClassifier {

    public static void main(String[] args) {
        // Connect to the locally served Granite model via Ollama
        LanguageModel llm = OllamaLanguageModel.builder()
                .baseUrl("http://localhost:11434") // Replace with your Ollama URL
                .modelName("granite-code") // Or your chosen Granite model
                .build();

        String promptWithExamples = """
                You are an issue categorization system... (as defined before) ...
                """;

        String userProblem = "The application unexpectedly closes when I switch between different views.";
        String formattedPromptWithExamples = String.format(promptWithExamples, userProblem);

        String category = llm.generate(formattedPromptWithExamples).content();
        System.out.println("Problem Category: " + category);
    }
}
Tool Calling for Issue Creation
Building on the classification, we could use tool calling to automatically create a ticket in an issue tracking system based on the classified problem and the user's description. This involves defining a tool (a Java function) that interacts with the issue tracking system and prompting the LLM to use this tool after successful classification. The prompt needs to guide the LLM to extract the necessary information (category and description) and invoke the appropriate tool.
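A minimal sketch of this pattern with LangChain4j's @Tool support, assuming a chat model that supports tool calling; the createTicket method and IssueAgent interface are hypothetical stand-ins for a real issue tracker integration:
import dev.langchain4j.agent.tool.Tool;
import dev.langchain4j.model.ollama.OllamaChatModel;
import dev.langchain4j.service.AiServices;

public class IssueCreationTools {

    // Hypothetical tool the LLM can invoke after classifying a problem.
    @Tool("Creates a ticket in the issue tracking system")
    public String createTicket(String category, String description) {
        // In a real application this would call the issue tracker's API.
        System.out.println("Creating " + category + " ticket: " + description);
        return "TICKET-42"; // placeholder ticket id
    }

    interface IssueAgent {
        String handle(String userProblem);
    }

    public static void main(String[] args) {
        OllamaChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434")
                .modelName("granite-code") // the chosen model must support tool calling
                .build();

        IssueAgent agent = AiServices.builder(IssueAgent.class)
                .chatLanguageModel(model)
                .tools(new IssueCreationTools())
                .build();

        System.out.println(agent.handle(
                "The 'Save' button on the user profile page does not store my changes."));
    }
}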
Best Practices
Iterate and Refine: It is important to continuously test different prompts with various user inputs to observe the Granite model's performance. Analyze the results and adjust your prompts to enhance accuracy and relevance. For instance, you might discover that adding more specific examples for particular types of problems improves classification.
Monitor Outputs: It's essential to keep a close watch on the categories generated by the model. Ensure they align with the intended categories and handle any unexpected or incorrect outputs. Implementing logging and metrics can be beneficial for this monitoring.
Stay Updated: The field of prompt engineering is constantly advancing. Keep informed about the latest research and techniques to continually improve your prompts and take advantage of new features in Granite models and LangChain4j.
Conclusion
Prompt engineering is a key skill for Java developers integrating LLMs into their applications. By understanding and applying techniques like clear instructions, contextual prompting, utilizing zero-shot and few-shot learning, and encouraging chain-of-thought reasoning, developers can greatly improve the performance and effectiveness of their AI-powered solutions. In the context of a Smart Issue Reporter, well-crafted prompts enable accurate problem classification, which can then be further automated through tool calling, leading to more efficient and intelligent applications.