Author: Akshat Batra, a contributor to Google Summer of Code 2025
GSoC project link

Table of Contents

About the Project

Problem Statement

Solution Overview

Future Direction

Learnings and Experience

Relevant PRs

About the Project

This Google Summer of Code (GSoC) project focused on integrating AI into the web-based Template Playground. The goal was to enhance the user experience by providing intelligent assistance for working with Accord Project tools, creating contracts, and performing AI-assisted modifications. This report outlines the project’s journey, the key features developed, and the future outlook.

Problem Statement

Accord Project’s Template Playground is a powerful tool for developing smart legal contracts, and it serves as the introduction to Accord Project tools for many users. However, users, especially those new to legal tech, may face challenges in understanding the code used to create contracts in the Accord Project ecosystem, writing and modifying code efficiently, and troubleshooting errors within their templates. The absence of an intelligent assistant meant that users had to rely on external resources or manual effort to resolve these issues, slowing down their workflow and making the learning curve steeper.

Solution Overview

Description

The AI Assistant is designed to address the aforementioned challenges by providing real-time, context-aware assistance directly within the Template Playground. It leverages Large Language Models (LLMs) to offer features such as code explanation, interactive chat, code application, error fixing, and inline suggestions.

Key Features

The AI Assistant is equipped with several key features to streamline the smart contract development process:

  • AI Assistant Chat Panel: A dedicated chat interface that lets users interact with the LLM of their choice via various providers (OpenAI, Mistral, Google, Anthropic, OpenRouter, and OpenAI-compatible APIs). The panel facilitates general queries, explanations, and more in-depth discussions. It provides options to select which editors (Concerto, TemplateMark, and JSON Data) to include as context in the message sent to the LLM, and it offers prompt presets for common tasks such as converting a given piece of text into TemplateMark or creating a Concerto model (a rough sketch of how this configuration might look follows this list).
  • Code Selection Menu with Explain and Chat Buttons: When users select code within any of the editors, a hovering menu appears with options to “Explain” the selected code (opening an inline popup) or “Chat” about it (sending the code to the chat panel); the latter allows users to ask follow-up questions and request further modifications to the code.
  • Apply AI-Generated Code: The chat panel includes a button at the top of each assistant-returned code block to apply the AI-generated code to the editors. Clicking it opens a code diff popup that lets users accept or reject individual lines before the changes are incorporated.
  • AI Fix for Errors: Whenever an error is detected in the problems panel, an “AI Fix” button appears, sending the error message to the chat window for the LLM to propose a solution.
  • Inline AI Suggestions: The editors provide inline AI suggestions for code completion, improving coding efficiency and reducing manual typing.
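For concreteness, here is a rough TypeScript sketch of how the provider choice and editor context selection described above might be modelled. All names below (AIConfig, EditorContext, ChatRequest) are hypothetical illustrations, not the Playground’s actual types.

```ts
// Hypothetical shapes, not the actual Template Playground types.
type Provider =
  | "openai"
  | "mistral"
  | "google"
  | "anthropic"
  | "openrouter"
  | "openai-compatible";

// Which editor contents the user chose to include as context.
interface EditorContext {
  concerto?: string;      // Concerto model editor contents
  templateMark?: string;  // TemplateMark editor contents
  jsonData?: string;      // JSON data editor contents
}

interface AIConfig {
  provider: Provider;
  model: string;          // ideally an advanced, coding-focused model
  apiKey: string;
  baseUrl?: string;       // only needed for OpenAI-compatible endpoints
}

// A single message sent from the chat panel to the selected LLM.
interface ChatRequest {
  config: AIConfig;
  userMessage: string;    // free-form text or a prompt preset
  context: EditorContext;
}
```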

AI Assistant in Action

  • Chat Panel and Configuration: The AI Assistant can be accessed via the sidebar in the Template Playground. Users can start chatting with it once an LLM has been configured.
  • Code Selection Menu: The AI Assistant provides intelligent assistance throughout the template creation process. Imagine a user struggling to understand the syntax of a concept in their Concerto model. They can simply select the text, and the “Explain” button will provide an inline explanation. If they need further clarification or want to modify the concept, they can use the “Chat” button to engage in a conversation with the AI Assistant. Here, we query the explanation of the Address concept.
  • Apply and Fix AI-Generated Code: Users can apply AI-generated code directly from the chat panel. If an error is encountered in user-written or AI-generated code, the “AI Fix” button appears. Upon clicking it, the AI analyzes the error and suggests a correction, which the user can then apply with ease. Here, we provide the Concerto model and JSON data as context to the AI Assistant and ask it to generate TemplateMark. The Assistant fixes the error in the code, leading to an error-free console at the end. The formatting of the TemplateMark can be improved with further instructions from the user.
  • Inline AI Suggestions: The AI Assistant also provides inline suggestions within the editors for faster code creation, reducing the need to type repetitive tokens. Here, it suggests the contents of the EventTicket concept.

The Internal Working

The above image shows an abstract overview of the inner workings of the AI Assistant. Under the hood, AI Assistant’s functionality is built upon several core components:

  • chatRelay.ts: This is where messages are pre-processed and system prompts are injected before routing requests to the selected LLM provider, ensuring optimized interactions (a simplified sketch of this step follows this list).
  • llmProviders.ts: This contains connectors for integrating multiple LLM providers, allowing for easy addition and management of various AI models.
  • prompts.ts: Centralizes and manages prompt templates used to enrich messages sent to LLMs, ensuring consistency and customization.
  • autocompletion.ts: This is the core component responsible for producing inline suggestions. It limits LLM usage by imposing a delay of at least 2 seconds between calls and ensuring that an LLM call is not made if the previous one is still being processed.
  • activityTracker.ts: Keeps track of the timestamp of the last user activity in the editors. This information is used by autocompletion.ts to trigger LLM calls for inline suggestions once the user has been inactive for 1 second (a simplified sketch of this timing logic also follows this list).
  • AIChatPanel.tsx, AIConfigPopup.tsx, CodeSelectionMenu.tsx and CodeDiffPopup.tsx: These UI components provide, respectively, the main chat panel, the configuration popup, the hovering menu with Explain and Chat buttons inside the editors, and the code diff view for accepting or rejecting AI-generated code. Together, they form the interface through which users interact with the AI Assistant.
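To make the message flow concrete, the sketch below shows one way a relay could inject a system prompt, enriched with the selected editor contents, before handing the messages to a provider connector. It is a simplified illustration under assumed names (relayChat, sendToProvider), not the actual chatRelay.ts code.

```ts
// Simplified sketch of the relay step described above; not the real
// chatRelay.ts implementation. sendToProvider stands in for the
// provider connectors defined in llmProviders.ts.
async function relayChat(
  userMessage: string,
  context: Record<string, string>, // e.g. { concerto: "...", jsonData: "..." }
  sendToProvider: (
    messages: { role: "system" | "user"; content: string }[]
  ) => Promise<string>
): Promise<string> {
  // Build a system prompt from a centralized template and enrich it with
  // whichever editor contents the user included as context.
  const systemPrompt = [
    "You are an assistant for Accord Project templates.",
    ...Object.entries(context).map(
      ([editor, content]) => `--- ${editor} ---\n${content}`
    ),
  ].join("\n\n");

  // Route the prepared messages to the connector for the selected provider.
  return sendToProvider([
    { role: "system", content: systemPrompt },
    { role: "user", content: userMessage },
  ]);
}
```

Similarly, the inactivity and rate-limiting rules used for inline suggestions can be illustrated with a small sketch. Again, the names (recordActivity, maybeRequestSuggestion) and structure are assumptions for illustration, not the actual autocompletion.ts and activityTracker.ts code.

```ts
// Simplified sketch of the timing rules described above; not the real code.
const MIN_GAP_BETWEEN_CALLS_MS = 2000; // at least 2 seconds between LLM calls
const REQUIRED_IDLE_MS = 1000;         // user must be inactive for 1 second

let lastActivity = Date.now(); // maintained by the activity tracker
let lastCallAt = 0;            // timestamp of the previous LLM call
let inFlight = false;          // true while a suggestion request is pending

// Called on every keystroke or cursor movement in the editors.
export function recordActivity(): void {
  lastActivity = Date.now();
}

// Decides whether an inline-suggestion call is allowed right now.
export async function maybeRequestSuggestion(
  editorText: string,
  fetchSuggestion: (text: string) => Promise<string>
): Promise<string | null> {
  const now = Date.now();
  const idleLongEnough = now - lastActivity >= REQUIRED_IDLE_MS;
  const gapLongEnough = now - lastCallAt >= MIN_GAP_BETWEEN_CALLS_MS;

  // Skip the call if the user is still typing, the previous call is still
  // being processed, or the last call was made too recently.
  if (!idleLongEnough || !gapLongEnough || inFlight) return null;

  inFlight = true;
  lastCallAt = now;
  try {
    return await fetchSuggestion(editorText);
  } finally {
    inFlight = false;
  }
}
```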

Future Direction

The integration of the AI Assistant is a significant step forward for the Accord Project’s Template Playground. However, there is still considerable scope for improving the quality of the AI-generated code. That quality depends heavily on the chosen LLM; for optimal results, it is recommended to use advanced, coding-focused models. Major LLM providers do not disclose the data sources used for model training, and an LLM’s performance here depends on how much Accord Project-related content and code was present in its training data.

The AI Assistant gives users the option to use any OpenAI-compatible API, which allows the use of a model custom-trained to better understand and produce Accord Project-related code.

However, before taking the route of training our own model, there is some low-hanging fruit that can improve the output while still using the available models. Accord Project already has quality documentation available; this can be used to dynamically enrich the prompts sent to the LLMs. Users could manually select the parts of the documentation to include with @ selectors, or vector search could be used to extract the relevant documentation automatically. The latter would be a better experience for the user, and work is already under way in this area.
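As a rough illustration of the vector-search idea (a possible direction, not an implemented Playground feature), documentation sections could be embedded ahead of time and the most similar ones prepended to the user’s message. The names below (DocChunk, enrichPrompt, the embed parameter) are hypothetical.

```ts
// Hypothetical sketch of retrieval-augmented prompt enrichment;
// embed() stands in for whichever embedding API would actually be used.
interface DocChunk {
  text: string;        // a section of the Accord Project documentation
  embedding: number[]; // pre-computed embedding for that section
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Pick the top-k documentation chunks most relevant to the user's message
// and prepend them to the prompt sent to the LLM.
async function enrichPrompt(
  userMessage: string,
  docs: DocChunk[],
  embed: (text: string) => Promise<number[]>,
  k = 3
): Promise<string> {
  const queryEmbedding = await embed(userMessage);
  const topDocs = docs
    .map((d) => ({ d, score: cosineSimilarity(queryEmbedding, d.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(({ d }) => d.text);

  return `Relevant documentation:\n${topDocs.join("\n---\n")}\n\nUser request:\n${userMessage}`;
}
```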

Learnings and Experience

This GSoC project provided invaluable experience and helped to polish my skills in several areas:

  • LLM Integration: Gained hands-on experience integrating and managing various LLM providers, understanding their APIs, and optimizing their usage.
  • Front-end Development: Enhanced skills in developing interactive user interfaces with React and managing complex state.
  • Code Quality and Best Practices: Learned the importance of writing clean, modular, and maintainable code, as well as the significance of thorough testing.
  • Open Source Collaboration: Experienced the dynamics of working within an open-source community, including contributing to a large codebase and collaborating with mentors and other developers.
  • Problem-Solving: Developed strong problem-solving skills by tackling complex technical challenges and devising innovative solutions.

I would like to express my gratitude towards my mentors Mrs. Diana Lease, Mr. Timothy Tavarez and Mr. Niall Roche for helping me along the way. A special thanks to Mr. Sanket Shevkar for enabling smooth collaboration with the redesign project. Thanks to Mr. Matt Roberts and Mr. Dan Selman for being there to provide resources and feedback.

Relevant PRs

The following Pull Requests (PRs) represent the core work accomplished:

  • #385 - feat: AI Assistant Foundational Features
  • #397 - feat: hover menu with explain and chat buttons for editor content
  • #400 - feat: apply AI generated code and fix errors in code using AI
  • #402 - feat: inline AI suggestions inside code editors