Booz Allen Hamilton – RFQ LLM GPT

Booz Allen Hamilton is primarily known for its expertise in consulting, analysis, and engineering services, particularly in government contracting, intelligence, AI, and digital transformation. I was fortunate to have an engagement at BAH and left with two major successes, one of which was launching an internal tool that produces first drafts of RFQ responses.

At Booz Allen Hamilton, initial drafts in the federal RFQ response process took 4-6 weeks to complete. Each proposal manager maintained a separate library of past RFQs, creating information silos that limited our ability to learn from previous successes and failures. The company needed a more efficient, data-driven approach to creating RFQ responses.

As Technical Product Manager, I led the development of an LLM-based tool to automate the first draft of RFQ responses. This required gathering scattered RFQ documents, creating a central repository, and developing a system that could learn from our past performance to generate high-quality initial drafts.

First, I mapped out the existing RFQ response process by conducting interviews with proposal managers across various departments. This revealed that successful RFQ responses often followed similar patterns; however, this knowledge was not being shared effectively. I discovered that proposal managers had collections of RFQs dating back several years, stored in various formats and locations. To create a unified document repository, I developed a standardized format for RFQ documentation. Working with proposal managers, we categorized past RFQs based on project type, agency, outcome, and key success factors. This process uncovered over 500 historical RFQ responses that would serve as training data for our large language model (LLM).
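
To make the repository concrete, here is a minimal Python sketch of what one categorized record might look like. The field names and values are illustrative, not our actual internal schema:

```python
from dataclasses import dataclass, field
from enum import Enum


class Outcome(Enum):
    WON = "won"
    LOST = "lost"
    NO_BID = "no_bid"


@dataclass
class RFQRecord:
    """One categorized entry in the centralized RFQ repository."""
    rfq_id: str                       # internal tracking identifier
    agency: str                       # issuing federal agency
    project_type: str                 # e.g. "digital transformation"
    outcome: Outcome                  # result of the historical bid
    success_factors: list[str] = field(default_factory=list)
    source_path: str = ""             # where the original document lives


# One historical response, categorized for the repository
record = RFQRecord(
    rfq_id="RFQ-2019-0042",
    agency="Department of Homeland Security",
    project_type="digital transformation",
    outcome=Outcome.WON,
    success_factors=["clear staffing plan", "strong past-performance citations"],
)
```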

The technical requirements for the LLM tool needed to balance several factors. The system had to maintain security for sensitive government project information, accurately parse complex federal Request for Quotation (RFQ) formats, and generate responses that matched federal procurement language patterns. I wrote detailed specifications for document processing, security controls, and output formatting.
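
As a rough illustration of the document-processing requirement: federal solicitations follow a fairly standard lettered-section layout (Section C for the statement of work, Section L for instructions to offerors, Section M for evaluation factors), so a parser first has to split an incoming document into those sections. The sketch below is a simplified stand-in for that step, not the production parser:

```python
import re

# Federal solicitations follow a fairly standard lettered-section layout,
# e.g. Section C (statement of work), Section L (instructions to offerors),
# Section M (evaluation factors).
SECTION_PATTERN = re.compile(r"^SECTION\s+([A-M])\b[ \-:.]*(.*)$", re.MULTILINE)


def split_sections(rfq_text: str) -> dict[str, str]:
    """Split raw RFQ text into a mapping of section letter -> section body."""
    matches = list(SECTION_PATTERN.finditer(rfq_text))
    sections = {}
    for i, match in enumerate(matches):
        start = match.end()
        end = matches[i + 1].start() if i + 1 < len(matches) else len(rfq_text)
        sections[match.group(1)] = rfq_text[start:end].strip()
    return sections


sample = ("SECTION C - STATEMENT OF WORK\nProvide analytics support...\n"
          "SECTION M - EVALUATION\nBest value tradeoff...")
print(split_sections(sample))
```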

Training data preparation was crucial. I collaborated with senior proposal managers to identify our most successful RFQ responses and analyze the factors that contributed to their success. We also examined unsuccessful bids to understand common pitfalls. This analysis informed the LLM’s training parameters and contributed to the development of evaluation metrics for generated responses.
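
The exact training parameters and metrics remain internal, but the evaluation idea can be illustrated with a toy example: check a generated draft for coverage of required terms and adequate length. The weights and checks below are invented purely for illustration:

```python
def score_draft(draft: str, required_terms: list[str], min_words: int = 300) -> float:
    """Toy quality score: required-term coverage plus a length floor.

    Returns a value in [0, 1]; the weights here are invented for
    illustration, not the metrics we actually used.
    """
    lowered = draft.lower()
    term_score = (sum(term.lower() in lowered for term in required_terms)
                  / max(len(required_terms), 1))
    length_score = min(len(draft.split()) / min_words, 1.0)
    return 0.7 * term_score + 0.3 * length_score


print(score_draft("Our team will maintain FAR 52.212 compliance and a staffing plan...",
                  ["FAR 52.212", "staffing plan"]))
```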

Security requirements were stringent since we handled sensitive government project information. I specified controls for data access, user authentication, and audit logging. The system needed to track which historical RFQs were used as references for each new response, maintaining compliance with federal contracting regulations. To meet these controls, the LLM tool ran entirely on internal servers, with no outbound connections to externally hosted models such as ChatGPT.
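
To give a feel for the audit-logging requirement, every generated draft had to be traceable back to the historical RFQs it drew on. A hedged sketch of appending such an audit record (the field names are assumptions, not our actual log format):

```python
import hashlib
import json
from datetime import datetime, timezone


def log_generation(user_id: str, new_rfq_id: str, reference_rfq_ids: list[str],
                   draft_text: str, log_path: str = "rfq_audit.log") -> None:
    """Append an audit record tying a generated draft to its source RFQs."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "rfq": new_rfq_id,
        "references": reference_rfq_ids,  # historical RFQs the draft drew on
        "draft_sha256": hashlib.sha256(draft_text.encode()).hexdigest(),
    }
    with open(log_path, "a", encoding="utf-8") as log:
        log.write(json.dumps(entry) + "\n")
```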

The user interface design focused on the workflows of proposal managers. I created specifications for features like RFQ document upload, response generation controls, and collaborative editing tools. The system needed to explain its response rationale by citing relevant past RFQs, helping proposal managers understand and refine the generated content.
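
Conceptually, the citation feature boils down to retrieving the historical responses most similar to each RFQ question and surfacing them alongside the draft. The deliberately simple token-overlap sketch below illustrates the idea; a production system would use proper semantic retrieval:

```python
def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two texts."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0


def cite_sources(question: str, repository: dict[str, str],
                 top_k: int = 3) -> list[tuple[str, float]]:
    """Return the top_k historical RFQ ids most similar to a question."""
    scored = [(rfq_id, jaccard(question, text)) for rfq_id, text in repository.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]


repo = {
    "RFQ-2019-0042": "staffing plan for cloud migration at a federal agency",
    "RFQ-2020-0107": "data analytics support services for DHS",
}
print(cite_sources("cloud migration staffing approach", repo))
```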

Throughout the development process, I conducted pilot tests with various proposal teams. Their feedback helped refine the system’s output format and accuracy. We found that the tool performed exceptionally well for first-draft responses. Stakeholders considered any inaccuracies minor, since the RFQ process already included four strict committee review steps where human reviewers would catch and correct them.

The LLM tool transformed our RFQ response process. Initial draft generation time dropped from 4-6 weeks to just 3 hours. The quality of first drafts improved significantly, with proposal managers reporting that generated responses required 80% fewer revisions than manually created drafts, as most RFQ questions were essentially the same across projects.

The centralized RFQ repository became a valuable knowledge base. Proposal teams gained access to successful response patterns across different government agencies and project types. The tool now saves approximately 1,000 staff hours per month, and the standardized approach to proposal generation has improved consistency across departments and offices.

Beyond the immediate efficiency gains, the project created lasting organizational change. Proposal teams now regularly contribute to the RFQ knowledge base, sharing insights and successful strategies. This cultural shift from information silos to collaborative knowledge sharing strengthened our overall proposal capabilities.

This experience showed how AI tools can transform established business processes while building organizational knowledge. The project’s success stemmed from carefully balancing technical capabilities, user needs, and security requirements, while maintaining a focus on the core goal of improving proposal quality and efficiency.