Think of a knowledge bank as the memory that fuels conversational AI. It holds all the data your AI chatbot needs to answer questions correctly. Much like a well organized library, what matters isn't how many books you have but how fast you can find the one you need. A knowledge bank makes the difference between a conversational AI that is a helpful assistant and one that is an inane mess that irritates users.
The idea sounds simple: feed your AI more information and it will synthesize that information to answer questions. The truth is far more complicated. How you format, arrange, and present this data has a massive effect on how well your AI performs. Too much information and the system drowns in it; too little and it can't answer even simple questions. Getting it just right is the real trick.
What it means for customers:
A precisely tuned knowledge bank lets customers get accurate, relevant answers immediately. Say goodbye to confusing responses and information overload. Conversations feel easier and more useful because the AI understands exactly what customers want. When a customer asks about a feature, that's what they get: the answer, not a wall of text covering everything remotely related to their request.
The experience is natural and conversational, as though they're speaking with an informed team member who knows exactly what they mean. Customers don't need to repeat their questions or rephrase them awkwardly to get what they want. This smooth journey builds trust and delight, so customers keep engaging with your AI support tools instead of running away from them in frustration.
What it means for teams:
The more efficient the knowledge bank, the smarter your AI system can work. Your teams spend less time cleaning up errors and redoing work, and more time on strategic tasks. They can focus on the complex, specific customer needs that require a human touch, because the AI handles basic questions accurately. Nobody has to constantly police the chatbot, correct it, or step in when it serves up wrong information.
Sales can confidently rely on the AI for lead qualification and trust that accurate product details reach prospects, without worrying that wrong information is being passed along. Marketing no longer has to hold its breath when launching AI-driven campaigns, because the messaging is accurate and on-brand. The whole organization benefits from an AI that genuinely helps rather than generating more work. Teams can scale customer interactions without scaling headcount, which makes operations more efficient and cost-effective.
The Challenge We Faced
We began developing an AI chatbot for a client. They had plenty of data: products, services, team members, contact information. The goal was modest: a customer should be able to ask the chatbot questions and receive correct answers. The client had spent years collecting data and building comprehensive databases covering every part of their business. They were excited about what AI could do, and they assumed their rich data set would make implementation straightforward.
The reality? The chatbot couldn't provide correct answers. Despite having, on paper, all the data it could ever need, the AI produced answers that were partially right and mostly wrong, or so vague as to be unhelpful. Now and then it would confidently state something that was the exact opposite of the truth. Other times it would simply say it didn't know, even when the answer sat plainly in the knowledge bank.
Customers would ask about one item and receive information about something entirely different with a similar name. They would ask about business hours and get the company's history. The gulf between what users asked and the irrelevant, incorrect output from the AI frustrated everyone. The client was right to be anxious: they had bought AI expecting it to enhance the customer experience and reduce the need for human assistance, yet it was causing more confusion and demanding more human intervention than before.
We had to figure out what was going wrong, and fast. The client's patience was running out, and every day the chatbot remained unreliable, it fell further short of the value it had promised.
What We Tried (That Failed)
First attempt: We believed more detail meant better results. This made sense: if the AI knew everything about everything, it should be able to answer any question correctly. We built out each entry to roughly 15-20 lines, covering every angle. Every item contained descriptions in both English and Bengali, long lists of keywords, phrasing variations, alternative names, related concepts, and background on when and how to use that particular piece of information.
We spent weeks packing those entries with detail, convinced that depth was the answer to the accuracy problem. The entries became mini-documents: not just bare facts but context, usage notes, and cross-references to related information. We thought we were being helpful by giving the AI everything it might ever want.
The result? The AI was overwhelmed by all that data. Accuracy stayed at just 40-50%. The chatbot would latch onto random words from the long entries and give responses that were technically related but unhelpful. It couldn't tell the essential from the supporting detail; it treated everything as equally important. Users got long, rambling responses that bore some relationship to their question but rarely answered it directly.
Second attempt: We put together a training manual with 50+ model questions in three languages. We reasoned that if we taught the AI how to respond to specific questions, it would learn the patterns and generalize them to new queries. We wrote example conversations covering the happy path plus as many variations as we could imagine for how a user might phrase a request. We translated everything into three languages to make sure the AI could accommodate a variety of users.
This approach seemed promising initially. We were teaching the AI by example, not only showing it what information was available but how to deploy that knowledge in conversation. We spent a lot of time developing these sample interactions, trying to think through every way a user might ask about a topic.
The knowledge bank now stretched past 10,000 lines. We had no idea we were creating a huge problem. The AI was even more confused; accuracy dropped to 35-40%. Instead of extracting general principles from the examples, the AI treated them as templates. It could only handle questions that closely matched the example phrasings, and anything even slightly different threw it off. The system was brittle, unable to accommodate the natural variation in how real users communicate.
Even worse, the AI occasionally blended multiple example responses into hybrid answers that made no sense. The sheer volume of examples generated noise that drowned out the real information we wanted the AI to draw on.
Final attempt: We supplied the same information several times over, in multiple formats: CSV, natural-language paragraphs, JSON, and diagrams. We figured the AI might just be struggling because it didn't have the perfect format, so we would give it choices. Perhaps some queries would be better served by structured CSV data and others by natural-language paragraphs. We added JSON for programmatic precision and even created diagrams to express connections between related pieces of information.
This was the worst approach of all. Redundancy peaked and the AI was thoroughly confused. Accuracy fell to 30-35%. The system didn't know which version of the information to use. Sometimes it would pull part of an answer from one format and the rest from another, producing disjointed responses. At other times it would notice that many sources said the same thing and waste processing effort trying to reconcile information that was already consistent.
The diagrams, intended to disentangle relationships, only added another layer of data the AI had to parse, with no clear guidance on when or how to use it. We had achieved the worst of both worlds: information overload and structural chaos. The chatbot had become so poor it was almost unusable.
At this point we had to step back and question everything we thought we knew. Everything we had done based on received wisdom (more data, more examples, more formats) had made the problem worse. We needed a breakthrough.
The Breakthrough: Doing the Opposite
We went in a radically different direction: less. It felt counterintuitive after all the effort we had put into building complete data sets, but we had no choice. The revelation was that the AI performed best when it received clear, unambiguous data; every added layer of complexity made performance worse.
We condensed each 15-20 line entry into a single key line. We found the core fact or facts that mattered in each entry and stripped away everything else. No examples, no variants, no contextual fluff, just the concentrated data the AI needed to answer questions. A product description went from a full section of features, benefits, applications, and keywords to one sentence with the name, primary capability, and key specs.
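To make that concrete, here is a minimal sketch of the kind of compression we're describing. The product, its fields, and the wording are hypothetical, not the client's actual data; the point is the shift from a multi-line, keyword-stuffed entry to one dense fact line.

```python
# Hypothetical example of compressing a verbose knowledge entry.
# The product and its details are invented for illustration only.

verbose_entry = """
Product: AquaPure 300 water filter
Description (English): The AquaPure 300 is a countertop water filter ...
Description (Bengali): ...
Keywords: filter, water, purifier, countertop, AquaPure, AP300, ...
Alternative names: AP-300, AquaPure Mini
Related products: AquaPure 500, replacement cartridges
Usage notes: Mention the 6-month cartridge life when asked about maintenance ...
(15-20 lines in total)
"""

# The condensed version keeps only what is needed to answer a question correctly.
condensed_entry = (
    "AquaPure 300: countertop water filter, 2 L/min flow, "
    "6-month cartridge life, price $79."
)
```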
The compression was difficult because we had to think hard about what information was actually necessary. We had to distinguish what was essential for the AI to produce a correct answer from what was merely nice to have. Much of the time, 90% of what we had written simply didn't need to be there.
We encoded the remaining information into a terse master prompt. Instead of a sprawling bank of knowledge, it became a tight fact sheet. The master prompt, the core set of instructions that governs how the AI behaves, delivered this condensed data to the model in a highly structured form so it could find and use specific facts with ease.
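Below is a rough sketch of how condensed fact lines can be assembled into a master prompt. The section labels, helper function, and sample facts are our own illustration of the approach, not the exact prompt we shipped.

```python
# Rough sketch: assembling condensed fact lines into a single master prompt.
# Section labels, wording, and the sample facts are illustrative placeholders.

facts = [
    "AquaPure 300: countertop water filter, 2 L/min flow, 6-month cartridge life, price $79.",
    "Support hours: Sun-Thu 9:00-18:00, closed on public holidays.",
    "Returns: unused items within 30 days, original receipt required.",
]

def build_master_prompt(facts: list[str]) -> str:
    """Combine behavior instructions with one condensed fact per line."""
    instructions = (
        "You are a customer support assistant.\n"
        "Answer only from the FACTS section below.\n"
        "If the answer is not in the facts, say you don't know.\n"
    )
    fact_block = "\n".join(f"- {fact}" for fact in facts)
    return f"{instructions}\nFACTS:\n{fact_block}"

print(build_master_prompt(facts))
```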
The Results
The change was dramatic. The knowledge data set shrank from over 10,000 lines to only a few hundred. We had cut roughly 97% of the content, keeping only what we absolutely needed. This wasn't cutting for the sake of cutting; it was removing redundancy, culling examples that should have been instructions, and stripping formatting that added no value.
The real magic was in accuracy: it jumped from 30-50% to 80-85% almost immediately. The chatbot now gives nearly flawless responses. Users began receiving answers that spoke directly to their questions, based only on relevant information, no more and no less. Response times improved because the AI no longer had to wade through thousands of lines of irrelevant data to find what it needed.
Customer satisfaction scores shot up. Before, users rated the chatbot poorly and abandoned it for human support; now they were completing their requests successfully. The chatbot handled the questions it could answer reliably, and when it escalated to human support, it was for genuinely difficult issues that only a person could judge.
The client was thrilled. They had gone from an underperforming AI implementation to a highly effective one without adding new data or switching to a more sophisticated AI model. We had simply given the existing AI the information it needed, in a format it could actually use. It confirmed everything our failed experiments had taught us: more isn't better; clarity is.
What We Learned: Technical Insights
- More is not always better. For conversational AI, a smaller amount of precisely targeted information beats a larger amount that pollutes the signal.
- Treat your knowledge bank as a reference, not a training set. The AI should look facts up at answer time, not learn patterns from worked examples stored alongside them.
- Separate knowledge from instructions. Facts belong in the knowledge bank; behavior belongs in the master prompt.
- Structure matters more than format. One consistently structured source beats the same facts duplicated across CSV, JSON, paragraphs, and diagrams.
- Test with real queries. Stuffing your knowledge bank with sample Q&As won't prepare your AI system for real customer inquiries; a minimal evaluation sketch follows below.
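As a concrete illustration of that last point, here is a minimal sketch of testing against real queries: take questions users actually asked (for example, exported from chat logs), run them through the bot, and track how many answers contain the expected fact. The ask_chatbot function is a placeholder for whatever interface your own system exposes.

```python
# Minimal sketch of an accuracy check against real user queries.
# ask_chatbot() is a placeholder for your own chatbot or LLM API call.

real_queries = [
    # Pulled from actual chat logs, not invented sample Q&As.
    ("What are your support hours?", "sun-thu 9:00-18:00"),
    ("How often do I replace the AquaPure 300 cartridge?", "6-month"),
]

def ask_chatbot(question: str) -> str:
    raise NotImplementedError("Call your chatbot / LLM API here.")

def accuracy(queries: list[tuple[str, str]]) -> float:
    """Share of real queries whose answer contains the expected fact."""
    correct = 0
    for question, expected_fragment in queries:
        answer = ask_chatbot(question).lower()
        if expected_fragment in answer:
            correct += 1
    return correct / len(queries)
```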
The Bottom Line
The “less is more” approach isn't just a mantra for conversational AI; in our experience, it demonstrably works.