Introducing AI Assistants in eazyBI
As our name suggests, we are a BI (Business Intelligence) solution. But nowadays, everyone is talking about a different letter combination: AI, a different type of intelligence. Many companies are using AI to attract investor money or to push their stock valuations higher. We do not have such pressure at eazyBI, and therefore, we thought about what could be useful and practical applications of the new AI technologies for our users.
The current AI technologies primarily use LLMs (large language models), which have been trained on vast amounts of text from the internet and can now read and write similar texts quickly. Therefore, the most popular AI uses are text generation from a prompt, summarization, and rewriting of texts. In most cases, the output does not need to be exactly “precise” or “correct”; it just needs to sound human-like. We can say the same about image generation from prompts: the generated images should look approximately related to the prompt we used, but we are not “compiling” and “executing” them to validate the results.
From user questions to data queries
In BI reports and charts, we have a more challenging goal. If a user asks a question about their business data, we do not want to return just some report or chart that “looks” nice. We want to return the correct results, and we want the user to be able to validate that they answer what was asked.
In this area, there are many examples of how LLMs can translate user questions about data to SQL queries. For example, a user might ask:
Show created vs resolved issues by month in last 12 months
and might get the following generated SQL query and the corresponding chart of created and resolved issues by month:
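A query of the kind such tools generate might look like this (a sketch; the `issues` table and its `created_at` and `resolved_at` columns are illustrative names, not taken from any particular tool):

```sql
-- Groups issues by the month they were CREATED, then counts how many
-- of those issues currently have a resolution date.
SELECT DATE_TRUNC('month', created_at) AS month,
       COUNT(*)                        AS created_issues,
       COUNT(resolved_at)              AS resolved_issues
FROM issues
WHERE created_at >= CURRENT_DATE - INTERVAL '12 months'
GROUP BY 1
ORDER BY 1;
```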
If the user doesn’t understand SQL, they might simply trust the results. But if we do understand SQL, we can see that this is probably not what was expected: it groups all issues by their creation date and then shows how many of them are currently unresolved or resolved. The user most probably expected the resolved issue count to be shown by the resolution date.
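For comparison, a query closer to what the user actually wanted would count resolved issues by their resolution month, for example (again with illustrative table and column names):

```sql
-- Counts created and resolved issues independently, each grouped by
-- its own relevant date, then joins the two monthly series.
SELECT COALESCE(c.month, r.month)      AS month,
       COALESCE(c.created_issues, 0)  AS created_issues,
       COALESCE(r.resolved_issues, 0) AS resolved_issues
FROM (SELECT DATE_TRUNC('month', created_at) AS month, COUNT(*) AS created_issues
      FROM issues GROUP BY 1) c
FULL OUTER JOIN
     (SELECT DATE_TRUNC('month', resolved_at) AS month, COUNT(*) AS resolved_issues
      FROM issues WHERE resolved_at IS NOT NULL GROUP BY 1) r
  ON c.month = r.month
WHERE COALESCE(c.month, r.month) >= CURRENT_DATE - INTERVAL '12 months'
ORDER BY 1;
```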
These examples of SQL generation, sometimes correct and sometimes wrong, inspired us to think about how we could do something similar in eazyBI. In our case, we already have a multi-dimensional data model of measures and dimensions that better matches business questions about data and hides the technical implementation of SQL queries.
Report builder assistant
As a first task, we started with a report builder assistant in the Analyze tab. We already have many sample reports and charts that users can explore to find the one that most closely matches their needs and customize it. But when you want to create your first custom report from scratch, you might face the “blank page” challenge: there are so many measures and dimensions to select from; where should you start? Now, you can open the report builder assistant panel (see the new Assistant button in the top right corner) and try either the sample prompts or write the question that you would like answered.
Let’s try the same question about created and resolved issues by month.
This question was correctly translated to the eazyBI measures “Issues created” and “Issues resolved”, and the correct Time dimension level and page filter were applied. You can continue the chat with the assistant and ask it to modify the created report or chart. You can also ask it to explain the available measures to better understand what each of them returns.
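For illustration, a greatly simplified sketch of such a report definition could look like this (the field names below are invented for readability and are not eazyBI’s actual report definition schema):

```json
{
  "cube": "Issues",
  "rows": { "dimension": "Time", "level": "Month" },
  "columns": { "measures": ["Issues created", "Issues resolved"] },
  "pages": { "dimension": "Time", "filter": "Last 12 months" }
}
```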
MDX formula assistant
Our eazyBI slogan is “Simple things easy, complex things possible”. Creating reports and charts is typically easy by using drag-and-drop and clicking through options. Later, you can start using calculated members with custom MDX (multi-dimensional expression) formulas that make more complex reporting requirements possible.
We have an extensive documentation site describing how to write MDX formulas, as well as all available MDX functions. Sometimes, finding the right documentation page to solve your specific calculation need can be challenging. Therefore, we are introducing the MDX formula assistant sidebar in the calculated member definition dialog. You can describe to the assistant the formula you would like to write, and you will get back the suggested MDX code along with links to the corresponding documentation pages with more details.
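For example, asking for the share of created issues that have been resolved might yield a formula along these lines (a sketch using the standard “Issues created” and “Issues resolved” measures mentioned above):

```
CASE WHEN [Measures].[Issues created] > 0 THEN
  [Measures].[Issues resolved] / [Measures].[Issues created]
END
```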
If the generated formula matches your needs, you can copy and use it in your calculation. If not, you can try to regenerate the response, which might suggest alternative solutions. Or even better, continue the chat and explain in more detail how you would like to improve the calculation.
Just like with the generated SQL example, you should still learn to read the generated MDX code and understand how it works. Assistants can help with that, too: select the code you would like to understand and ask the assistant to explain it!
Calculated custom field assistant
Instead of doing complex calculations in MDX every time a report is executed, you can precalculate additional custom field measures or dimensions during the data import. You can use custom JavaScript code to extract the necessary data from the Jira issue JSON representation, do the calculations, and return the required result. This, too, is the “complex things possible” side of eazyBI. Therefore, to make it easier, we are adding custom field JavaScript code assistants as well.
Explain to the assistant what data you would like to extract from the issue, copy the generated JavaScript code, and test it with some issues. Now, this is much easier, isn’t it?
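For example, if you asked for the number of days from issue creation to resolution, the generated code might look roughly like this (a sketch: “days to resolve” is a hypothetical custom field, while `issue.fields.created` and `issue.fields.resolutiondate` are standard Jira issue JSON fields):

```javascript
// Hypothetical "Days to resolve" custom field: the number of days from
// issue creation to resolution, left empty for unresolved issues.
if (issue.fields.resolutiondate) {
  var created = Date.parse(issue.fields.created);
  var resolved = Date.parse(issue.fields.resolutiondate);
  issue.fields.customfield_days_to_resolve =
    Math.floor((resolved - created) / (1000 * 60 * 60 * 24));
}
```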
How does it work?
As you might guess, we are using LLMs (large language models) to get responses to user questions. But if you asked the same questions to, for example, ChatGPT, you would not get equally good answers. We supplement the LLMs with additional knowledge and examples to make the responses relevant to each specific eazyBI use case.
When we receive the user question, we first classify it. For instance, we determine whether it is a request to build a report, to explain a measure, or a general question about eazyBI functionality.
Based on the type of question, we search for documentation examples, the available standard measures and dimensions, and sample report definitions to include in the context and get better answers from the LLM. We use vector-based semantic search to find similar examples by their semantic meaning rather than just matching the exact words from the user prompt.
Then, based on the question classification, the corresponding LLM instructions template is used, including the context data with examples from the previous step. The instructions also describe in detail how eazyBI report definitions look or the structure of the Jira issue JSON representation.
We have implemented interfaces with several leading LLM providers so that we can test and select which currently provide the best results for our needs. As we are using the Google Cloud Platform for eazyBI Cloud, we have chosen Google Cloud Vertex AI APIs, which provide access to the latest Gemini model by Google and the Claude 3 model by Anthropic.
After receiving the response from the LLM, we check for and fix known errors in the generated eazyBI report definitions. LLMs can still make mistakes, and we want to reduce the cases where we present invalid report definitions that fail to load.
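Put together, the flow resembles a typical retrieval-augmented generation pipeline. Here is a minimal sketch in JavaScript (all function names are hypothetical illustrations of the steps above, not eazyBI internals):

```javascript
// Hypothetical outline of the assistant pipeline described above.
async function answerUserQuestion(question) {
  // 1. Classify the question (build a report, explain a measure,
  //    or a general question about eazyBI functionality).
  const type = await classifyQuestion(question);

  // 2. Vector-based semantic search for relevant context: documentation
  //    examples, available measures and dimensions, sample report definitions.
  const context = await semanticSearch(question, { type });

  // 3. Fill the instructions template for this question type with the context.
  const prompt = buildInstructions(type, context, question);

  // 4. Call the selected LLM provider (e.g., via Google Cloud Vertex AI).
  const response = await callLLM(prompt);

  // 5. Check the generated report definition and fix known errors
  //    before presenting it to the user.
  return validateAndFix(response);
}
```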
It is important to note that customer data imported into eazyBI are never sent to LLMs and are not exposed to any external services. Only user prompts and metadata about measures and dimensions are used in the instructions, and the generated report definition is executed the same way as any other report created by users.
What assistants can do, and what they can't
First, keep in mind that assistants, like any other current AI technology, can make mistakes, and you should validate the generated reports and formulas. They can make you more productive, but you still need to understand your data and validate the report results.
Assistants (and we, as we build them) are still learning. We started with an initial set of questions that we imagined users might ask. As we now launch the assistants to a broader audience, there may be many questions where the provided answers are not as good as expected. We will review the usage, identify new sets of common questions, and improve both the knowledge we feed to the assistants and the instructions we provide.
You can rate the answers you receive with a thumbs up or down and provide additional comments. Please use this to give feedback, as it will help us continue improving the assistants.
Please remember that assistants can't read your mind, so tell them exactly what you want. If you provide limited details, they will try their best to answer using the known examples, but the result might not match your needs.
And finally: assistants are available 24x7 and are ready to help you at any time. But our support team is still more intelligent and more eager to help when you have challenging reporting requirements. Please do not hesitate to ask our support for additional help when the assistant can't give you the right advice.