
Recent AI Work
Jul 1, 2024
3 min read
Saying "recent" IA work is a bit redundant, but these examples are from current client work, so they are really quite recent. :)
AI-Powered Site Search/Chatbot
Over the years, prior to the popularity of LLMs, I worked on a number of chatbot interfaces (Salvation Army, UC Health, American Bible Society, and others). Key to making these "dumb" chatbots work was educating the user on how best to interact with them. The typical solution was a combination of interface cues, rails, and simple examples showing the user what the bot expected.
I've recently been working on an internal project that leverages an LLM trained on a very specific client data set. The LLM is used behind the scenes to understand the user's intent, and more traditional, controlled methods then generate the response. This lets the user ask a question in whatever way makes sense to them, while the output is controlled so that there is no chance of wrong or inappropriate content being presented. As of this writing, this work is being implemented for two clients: one in the medical space, the other a large university. For the medical implementation, the intention is to present an AI-assisted search to users, intentionally staying away from medical "chatbot" territory. For the university, the solution will take a more traditional chatbot form factor.
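A minimal sketch of that division of labor might look like the following. Everything here is illustrative: the post doesn't name an LLM provider, so I'm assuming an OpenAI-style chat API, and the intent labels and canned responses are made up.

```python
# Sketch of the "LLM for intent, controlled output" pattern described above.
# The intent labels, responses, and the OpenAI-style client are all my own
# illustrative assumptions; the actual project's stack isn't named here.
from openai import OpenAI

client = OpenAI()

# Curated, pre-approved responses. The LLM never writes user-facing text,
# which is what removes the chance of wrong or inappropriate content.
APPROVED_RESPONSES = {
    "clinic_hours": "Our clinics are open Monday through Friday, 8am to 5pm.",
    "billing": "Billing questions are handled by our billing office.",
    "unknown": "I'm not sure I understood. Could you rephrase your question?",
}

def classify_intent(user_message: str) -> str:
    """Ask the LLM only to map a free-form question onto a fixed label set."""
    labels = ", ".join(k for k in APPROVED_RESPONSES if k != "unknown")
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": f"Classify the user's question as exactly one of: "
                        f"{labels}. Reply with the label only, or 'unknown'."},
            {"role": "user", "content": user_message},
        ],
    )
    label = completion.choices[0].message.content.strip()
    return label if label in APPROVED_RESPONSES else "unknown"

def answer(user_message: str) -> str:
    # Users can phrase the question any way that makes sense to them;
    # the response always comes from the controlled set.
    return APPROVED_RESPONSES[classify_intent(user_message)]
```

The design choice worth noticing is that the LLM's output never reaches the user directly; it only selects from vetted content.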
Although the basic UI is essentially the same form factor as the old chatbots, the UI cues and onboarding have intriguingly different requirements. With AI chatbots, users still need guidance to get over the initial trust hurdle: they aren't yet used to thinking of a chatbot as something they can simply have a conversation with and trust to understand them. We have found that fewer UI rails and a bit more instruction are effective in getting initial engagement. With LLMs, however, users can build on what they have already said. To get the full benefit of an LLM solution, users need to be encouraged to keep engaging conversationally until they get the info they want. To that end, I implemented a post-result prompt with LLM-driven examples to show the user the way forward. In informal testing, this has been highly effective.
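The post-result prompt could be sketched along the same lines. Again, the model choice and prompt wording are assumptions rather than the project's actual values; the idea is just that the LLM proposes follow-up questions while the surrounding UI stays in control of how they are presented.

```python
# Sketch of the post-result prompt: after each answer, ask the LLM for a few
# natural follow-up questions to surface as tappable suggestions. The model
# choice and prompt wording are assumptions, not the project's actual values.
from openai import OpenAI

client = OpenAI()

def suggest_followups(question: str, answer: str, n: int = 3) -> list[str]:
    """Generate n short follow-up questions the user might ask next."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system",
             "content": f"Given a user's question and the answer they received, "
                        f"suggest {n} short follow-up questions the user might "
                        f"naturally ask next. One question per line, no numbering."},
            {"role": "user",
             "content": f"Question: {question}\nAnswer: {answer}"},
        ],
    )
    lines = completion.choices[0].message.content.splitlines()
    return [line.strip("- ").strip() for line in lines if line.strip()][:n]
```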
Dynamic AI-Generated Images for a Museum Scavenger Hunt
For this project, I used Stable Diffusion through ComfyUI to build a workflow that uses elements of artwork found in a scavenger hunt to generate abstract art as a reward. Users collect key motifs from paintings they like using an app we previously developed for the museum; AR and image recognition handle the collection. When users have collected enough motifs, they are prompted to rank them, and an abstract image is generated that contains recognizable forms of the collected motifs. The challenge was to give the diffusion model enough guidance to keep the supplied motifs recognizable while also giving it enough leeway to generate something that looked like art. We worked directly with the artists to identify the motifs they wanted to include and to demonstrate the range of possible outcomes. Including the artists in the process demystified the AI-generated art in a way that both calmed fears and got them excited about the project. In addition to the Stable Diffusion work, I was responsible for the UX workflow and interface for the app.
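To make the guidance-versus-leeway tradeoff concrete, here is a hypothetical sketch of the motif-to-prompt step. It assumes ComfyUI's standard `(text:weight)` emphasis syntax and its HTTP `/prompt` endpoint; the weights, base prompt, and motifs are my own stand-ins, not the project's actual values.

```python
# Hypothetical sketch of the motif-to-prompt step. Assumes ComfyUI's
# standard "(text:weight)" emphasis syntax and its HTTP /prompt endpoint;
# the weights, base prompt, and motifs are stand-ins, not project values.
import json
import urllib.request

def build_prompt(ranked_motifs: list[str]) -> str:
    """Weight higher-ranked motifs more heavily so they stay recognizable,
    while the loose base prompt leaves the model leeway to compose
    something that reads as art."""
    weighted = []
    for rank, motif in enumerate(ranked_motifs):
        # Map rank 0..n-1 onto a weight that falls from 1.4 to 1.0.
        weight = 1.4 - 0.4 * rank / max(len(ranked_motifs) - 1, 1)
        weighted.append(f"({motif}:{weight:.2f})")
    return "abstract painting, " + ", ".join(weighted)

def queue_generation(workflow: dict) -> None:
    """POST a workflow graph to a local ComfyUI server's /prompt endpoint."""
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",
        data=json.dumps({"prompt": workflow}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# Example: build_prompt(["spiral shell", "red poppies", "gold leaf"])
# -> "abstract painting, (spiral shell:1.40), (red poppies:1.20), (gold leaf:1.00)"
# The prompt string would be dropped into the workflow's CLIPTextEncode node
# before queueing; the workflow graph itself is elided here.
```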