How to Set Up Citations and Trusted Answers in Retrieval-Augmented Generation Chats
Become the Trusted Expert: Elevating Your Information with AirgapAI
In today’s fast-paced digital landscape, information is power, but trusted information is invaluable. Imagine being the professional who always shows their work—every answer supported by clear, traceable evidence. This isn't just about accuracy; it's about building undeniable confidence and transparency, especially for critical roles in analysis, customer support, and legal review. You’re not just providing answers; you’re providing answers with receipts.
Welcome to the world of AirgapAI, where you can empower yourself and your team to harness Artificial Intelligence (AI) with complete traceability and security. This comprehensive guide will walk you through setting up and leveraging AirgapAI to deliver highly accurate, cited responses within your Retrieval-Augmented Generation (RAG) chats, ensuring every piece of information is backed by a verifiable source. With AirgapAI, you gain trust through traceability, all within a completely offline and secure environment.
The Foundation: Understanding Key Artificial Intelligence Concepts
Before diving into the specifics of AirgapAI, let's establish a baseline understanding of some fundamental Artificial Intelligence concepts. Even if you know absolutely nothing about AI, we’ll guide you through.
What is Artificial Intelligence (AI)?
At its core, Artificial Intelligence is a broad field of computer science that gives machines the ability to perform tasks that typically require human intelligence. This includes things like learning, problem-solving, understanding language, recognizing patterns, and making decisions. Think of it as teaching computers to "think" or "reason" in ways similar to humans.
What are Large Language Models (LLMs)?
Large Language Models are a specific type of Artificial Intelligence, a subset of machine learning, that are trained on vast amounts of text data. Their primary function is to understand, generate, and process human language. When you interact with a chatbot that can write emails, summarize documents, or answer questions in a conversational style, you are likely interacting with a Large Language Model. They are called "large" because of the immense quantity of data they are trained on and the number of parameters (the internal variables that the model uses to make predictions) they contain.
What is Retrieval-Augmented Generation (RAG)? And Why is it Crucial for Trusted Answers?
While Large Language Models are powerful, they have a key limitation: they can sometimes "hallucinate" or generate incorrect information, especially if the topic is outside their training data or requires very specific, up-to-date facts. This is where Retrieval-Augmented Generation (RAG) comes in.
Retrieval-Augmented Generation is a technique that enhances Large Language Models by giving them access to an external, trusted knowledge base. Instead of relying solely on what they learned during training, a RAG system first retrieves relevant information from your specific data (like your company documents, databases, or curated knowledge bases), and then augments the Large Language Model's ability to generate an answer based on that retrieved, trusted data.
Why is RAG important for trusted answers? RAG is crucial because it allows the AI to:
- Access Proprietary Information: It can provide answers based on your unique, internal company data, not just general public knowledge.
- Reduce Hallucinations: By retrieving facts from a trusted source, the likelihood of the AI inventing information is significantly reduced.
- Provide Citations: Most importantly for this guide, RAG enables the system to show where it got the information from, providing direct references back to the original source documents or data "blocks" within your knowledge base. This is the foundation of trusted, verifiable AI answers.
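To make the retrieve-then-generate flow concrete, here is a minimal, self-contained Python sketch of the pattern described above. The function names, block fields, and the toy keyword-overlap scorer are purely illustrative; they are not AirgapAI's internal API.

```python
# Minimal sketch of retrieve-then-generate with citations (illustrative only).

def similarity(question: str, text: str) -> float:
    """Toy relevance score: fraction of question words that appear in the text."""
    q_words = set(question.lower().split())
    t_words = set(text.lower().split())
    return len(q_words & t_words) / max(len(q_words), 1)

def retrieve_blocks(question: str, knowledge_base: list, top_k: int = 5) -> list:
    """Rank the knowledge base against the question and keep the top_k entries."""
    ranked = sorted(knowledge_base,
                    key=lambda b: similarity(question, b["trusted_answer"]),
                    reverse=True)
    return ranked[:top_k]

def answer_with_citations(question: str, knowledge_base: list):
    """Augment a prompt with retrieved blocks, then return an answer plus citations."""
    blocks = retrieve_blocks(question, knowledge_base)
    context = "\n".join(f"- {b['name']}: {b['trusted_answer']}" for b in blocks)
    # In a real system this augmented prompt would be sent to the local model;
    # here we return the top block's answer to keep the sketch runnable.
    prompt = f"Answer using only this trusted context:\n{context}\n\nQuestion: {question}"
    answer = blocks[0]["trusted_answer"] if blocks else "No relevant source found."
    citations = [b["name"] for b in blocks]
    return answer, citations

knowledge_base = [
    {"name": "US Political Parties",
     "trusted_answer": "The two major political parties in the United States are the Democratic Party and the Republican Party."},
    {"name": "US Capital",
     "trusted_answer": "The capital of the United States is Washington, D.C."},
]
answer, citations = answer_with_citations(
    "What are the major political parties in the United States?", knowledge_base)
print(answer)
print("Sources:", citations)
```

Even in this toy form, the key property is visible: the answer is generated only from retrieved, named sources, and those names travel with the answer as citations.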
The Significance of "Air-gapped" and "Local" Artificial Intelligence
When we talk about "Air-gapped" and "Local" Artificial Intelligence, we are referring to solutions that operate entirely within your controlled environment, typically on your own device or internal network, without needing a connection to external cloud servers or the public internet.
Why is this significant?
- Security and Data Sovereignty: Your sensitive data never leaves your device or your organization's premises. This is paramount for industries dealing with confidential, proprietary, or regulated information (e.g., government, finance, healthcare). You maintain complete control and ownership over your data.
- Privacy: It ensures that your conversations and the data you process are not used to train external models or stored in third-party clouds, protecting user privacy.
- Offline Access: The AI works even without an internet connection, making it ideal for field operations, secure facilities, or environments with unreliable connectivity.
- Cost-Effectiveness: It often eliminates recurring subscription fees, per-token charges, and other hidden costs associated with cloud-based AI, offering a perpetual license model.
AirgapAI embodies all these principles, delivering a robust, secure, and highly accurate AI experience right on your device.
AirgapAI: Your Trusted Offline Artificial Intelligence Assistant
Iternal Technologies' AirgapAI is a revolutionary Artificial Intelligence solution designed for businesses and individuals who demand superior accuracy, uncompromising security, and cost-effectiveness. It functions as a complete, self-contained Large Language Model platform that runs entirely on your personal computer, specifically optimized for Artificial Intelligence Personal Computers (AI PCs).
Key benefits of AirgapAI:
- 100% Local and Air-Gapped: All processing occurs on your device, ensuring your data never touches the cloud or external networks.
- Unparalleled Accuracy with Blockify: Our patented Blockify technology dramatically reduces AI hallucinations, boosting accuracy by up to 7,800% (78 times).
- Cost-Effective Ownership: Offered as a one-time perpetual license per device, AirgapAI is often 10 to 15 times less expensive than cloud-based alternatives like Microsoft Copilot or ChatGPT Enterprise, with no hidden fees.
- Offline Functionality: Work with Artificial Intelligence anytime, anywhere, even without an internet connection.
The Power of Blockify: Building Your Trusted Knowledge Base for Citations
The cornerstone of AirgapAI's trusted answers and accurate citations is our patented Blockify technology. Blockify is the ultimate data management solution, specifically designed to prepare your valuable corporate knowledge for optimal interaction with Large Language Models.
What Blockify Does
Imagine having thousands of documents—sales proposals, legal contracts, engineering specifications, customer support guides—all brimming with crucial information. Traditionally, feeding this raw data to an AI can lead to messy, inaccurate, or hallucinated responses. Blockify solves this by transforming your "documents" into highly structured, precise "blocks" of trusted information.
Here’s a detailed breakdown of the Blockify process:
- Data Ingestion: Blockify ingests vast quantities of diverse data sets. This includes common file formats such as text files, HyperText Markup Language documents, Portable Document Format files, Microsoft Word documents, Microsoft PowerPoint presentations, and even graphic files. For video content, the system can extract still frames or transcribe audio as needed, ensuring comprehensive data capture. 
- Condensing into Modular "Blocks": Blockify intelligently processes and condenses your ingested data into concise, modular "blocks." Each block is a self-contained unit of highly distilled, accurate information. This process can reduce the original data size by as much as 97.5% (down to 2.5% of the original content), making it incredibly efficient for Large Language Models to process. 
- Structured Information within Each Block: To ensure maximum clarity and utility for Artificial Intelligence, each block is structured with three critical components:
  - A Name: This is a descriptive title for the content within the block, often displayed in a distinct color (e.g., blue) within the AirgapAI interface to quickly identify the topic or source.
  - A Critical Question: This is the key query or type of question that a user might ask, which this specific block is designed to answer. It acts as an optimized prompt for the Large Language Model.
  - A Trusted Answer: This is the distilled, accurate, and approved response to the critical question. It’s carefully crafted to avoid redundancy, outdated information, or any content that could lead to AI hallucinations.
- Rich Metadata for Security and Data Governance: Each block is tagged with rich metadata. This includes critical attributes such as classification levels, access permissions, and other relevant details. This robust metadata framework is essential for supporting zero-trust environments, ensuring that only authorized users or personas can access specific sensitive information. This granular control is vital for maintaining data sovereignty and security within organizations. 
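To illustrate the structure described above, here is a hedged example of what a single block might look like once its name, critical question, trusted answer, and metadata are assembled. The field names and metadata values are hypothetical placeholders, not AirgapAI's published schema.

```python
import json

# Illustrative sketch of a single Blockify-style block. Treat the keys and
# metadata attributes as placeholders that mirror the components described above.
block = {
    "name": "Employee Remote-Work Policy",
    "critical_question": "How many days per week may employees work remotely?",
    "trusted_answer": "Employees may work remotely up to three days per week with manager approval.",
    "metadata": {
        "classification": "Internal Use Only",          # hypothetical attribute
        "access_roles": ["HR", "All Employees"],        # hypothetical attribute
        "last_reviewed": "2025-01-15",                   # hypothetical attribute
    },
}

print(json.dumps(block, indent=2))
```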
The Impact of Blockify: From Raw Data to Reliable Citations
The meticulous process of Blockify directly translates into the superior performance and trustworthiness of AirgapAI.
- Foundation for Reliable Citations: Because your data is broken down into discrete, named, and validated blocks, when AirgapAI generates an answer using Retrieval-Augmented Generation, it can directly point to the specific Blockify blocks that contributed to that answer. These blocks become your verifiable citations.
- Dramatic Accuracy Improvement: By refining data inputs into these high-quality, structured blocks, Blockify enables a remarkable improvement in the accuracy of your Large Language Models—up to 7,800%, or 78 times better than using raw, unstructured data. This dramatically reduces the risk of AI hallucinations.
- Human-in-the-Loop for Data Governance: After ingestion and block creation, these blocks are sent for a quick human review. This crucial step allows your team to update or approve messaging, flag outdated content (for example, from the year 2019), and ensure the entire knowledge base is current and trustworthy before it impacts Artificial Intelligence responses. This human oversight is key to maintaining data quality and integrity.
Blockify ensures that the answers you receive from AirgapAI are not just coherent, but also deeply rooted in your company's vetted and trustworthy information, complete with traceable citations.
Setting Up Your AirgapAI for Retrieval-Augmented Generation with Citations: A Step-by-Step Guide
To leverage AirgapAI's powerful Retrieval-Augmented Generation capabilities and generate cited answers, you first need to install and configure the application. This process is designed to be straightforward, even if you're new to Artificial Intelligence applications.
1. Downloading the Installer Package
The first step is to obtain the AirgapAI installer.
- Your Information Technology department will provide you with a secure internal or cloud link to download the latest ZIP archive. 
- Save this file to a readily accessible location on your computer, such as your "Downloads" folder or another folder where you have write permissions.
  - Example file: AirgapAI-v1.0.2-Install.zip
2. Installing the Application
Once you have the installer package, you can proceed with the installation.
- Extract the Files: Locate the downloaded ZIP file (e.g., AirgapAI-v1.0.2-Install.zip). Right-click on it and select "Extract All..." from the context menu.
- Choose Destination: A wizard will appear, prompting you to choose a destination folder for the extracted files. The default is typically a new folder created within your "Downloads" directory. Click "Extract."
- Launch Installer: Open the newly extracted folder. You will find an executable file named AirgapAI Chat Setup.exe. Double-click this file to begin the installation process.
- Follow the Wizard:
  - Accept License Agreement: Read and accept the license agreement.
  - Create Desktop Shortcut: It's recommended to select the option to create a desktop shortcut for easy access.
  - Install: Click "Install" to start the installation.
  - Finish: Once the installation is complete, click "Finish."
- Operating System Security Prompts: If your operating system (such as Windows 11) presents any security prompts (e.g., SmartScreen, Gatekeeper), choose "Allow" or "Run anyway" to proceed with the installation. AirgapAI is a trusted application from Iternal Technologies.
3. First-Launch Onboarding Wizard
Upon successful installation, launch AirgapAI Chat using your new desktop shortcut or the Start menu entry. The very first time you launch the application, it will check for existing Large Language Models and, if none are found, will initiate an intuitive onboarding flow.
3.1. Profile & Chat Style
- Click "Start Onboarding."
- Display Name: Enter a display name for your user profile. The default is "You," but you can personalize it.
- Chat Style: Select your preferred chat style from the available options (e.g., Iternal Professional, Casual, Dark Mode, Retro). This customizes the application's appearance.
- Click "Next."
3.2. Uploading the Core Large Language Model
This step involves loading the primary Artificial Intelligence model that will power your chats.
- On the "Models" screen, expand the "Available Models" drop-down menu. Initially, it will be empty. 
- Click "Upload Model." 
- Browse to the /models/ subfolder located within the folder where you extracted the installer package.
- Choose a Model: Select a Large Language Model optimized for your computer's hardware specifications:
  - Llama-1B: Suitable for 2024 Integrated Graphics Processing Units (iGPUs) or low-power devices.
  - Llama-3B: Recommended for Integrated Graphics Processing Units from 2025 or computers with a dedicated Graphics Processing Unit.
- Click "Save." The upload typically takes approximately 30 seconds.
  - Note: For system administrators managing the application, Large Language Models can also be added or updated by accessing the folder created after model upload, which is typically located within the %appdata% directory (e.g., C:\Users\John\AppData\Roaming\IternalModelRepo).
3.3. Uploading an Embeddings Model
An embeddings model is essential for the Retrieval-Augmented Generation process, as it helps the AI understand the semantic meaning of your data and queries.
- Still on the onboarding page, click "Upload Embeddings Model." 
- Open the /models/ subfolder from your extracted installer folder.
- Select Jina-Embeddings.zip.
- Click "Save." This upload also takes around 30 seconds.
  - Note: System administrators can also add or update Embeddings Models by modifying the contents of the folder created within the %appdata% directory (e.g., C:\Users\John\AppData\Roaming\IternalModelRepo).
3.4. Adding Sample or Custom Datasets (Crucial for Retrieval-Augmented Generation and Citations)
Datasets are the fuel for Retrieval-Augmented Generation, allowing AirgapAI to provide contextually relevant and cited answers based on your specific information.
- Click "Upload Dataset." 
- Navigate to the /datasets/ subfolder within your installer folder.
- Select CIA_World_Factbook_US.jsonl. This is a sample dataset to help you get started (a short sketch of the JSON Lines format appears after this list).
- Click "Save."
  - Note: For administrators updating datasets on the system, new updates can be pushed by modifying the contents of files saved within the %appdata% directory (e.g., C:\Users\John\AppData\Roaming\airgap-ai-chat\CorpusRepo).
  - Important Tip for Accuracy: While you can directly upload Microsoft Word documents, Portable Document Format files, or text files, for larger collections of documents and to achieve the optimal accuracy gain of approximately 78 times, it is highly recommended to convert your corpora using our Blockify technology. A local, on-device version of Blockify will be available at the beginning of the third quarter of 2025.
3.5. Finish Onboarding
- Verify that all three essential items—the Core Large Language Model, the Embeddings Model, and at least one Dataset—are successfully added.
- Click "Continue." AirgapAI Chat will now boot with your selected configurations.
4. Initial Model Benchmarking
Upon the first launch of a model, AirgapAI Chat will offer to benchmark your hardware. This is a recommended step.
- Click "Run Benchmark" (recommended).
- Duration: This process takes approximately two minutes and measures critical performance metrics such as tokens per second and inference speed.
- Context Window: You have the option to skip the benchmark, but if you do, your context-size limits will remain at a conservative 2,000 tokens. Completing the benchmark allows you to expand the context window to larger sizes (e.g., up to 32,000 tokens) later in the settings. You can adjust the token context window after the benchmark by visiting "Settings" > "Chat" and dragging the slider to your desired size.
Congratulations! Your AirgapAI application is now installed and configured, ready to provide secure, accurate, and cited answers from your trusted data.
Leveraging Retrieval-Augmented Generation with Trusted Answers and Citations in AirgapAI Chat
With AirgapAI installed and your datasets loaded, you are now ready to engage in Retrieval-Augmented Generation (RAG) chats that provide trusted answers with verifiable citations. This is where the power of AirgapAI truly shines.
1. Accessing Your Datasets
For AirgapAI to provide answers from your specific knowledge base, you need to activate the relevant datasets.
- Toggle Datasets ON: On the AirgapAI chat interface, locate the sidebar (usually on the left). Here, you will see a list of your uploaded datasets.
- Select Your Dataset: To use a specific dataset, simply toggle it "ON." For this guide, ensure the "CIA World Factbook for USA" dataset (which you uploaded during onboarding) is selected. This tells the Artificial Intelligence to retrieve information from this particular source when generating responses.
2. Asking Questions and Receiving Cited Responses
Now, let's put Retrieval-Augmented Generation into action.
- Formulate Your Query: In the chat input box, ask a question that relates to the content of your selected dataset.
  - Example Prompt: "What are the major political parties in the United States?"
- Observe the RAG Engine: When you submit your question, AirgapAI's RAG engine will silently go to work. It will search through your activated dataset and identify the most relevant "IdeaBlocks" (the distilled, trusted blocks created by Blockify) that can answer your query.
- Large Language Model Synthesis and Citations: The Large Language Model then takes these retrieved IdeaBlocks and synthesizes a coherent, trusted answer. Crucially, AirgapAI will display citations, indicating which specific blocks or documents were used to formulate the response. This direct link back to your vetted sources is what makes AirgapAI's answers so trustworthy and verifiable.
3. Understanding Source Ranking and Block Linking
AirgapAI's ability to provide transparent, cited answers is built on its intelligent data retrieval and ranking system.
- Identifies and Ranks Top Blocks: For any given question, AirgapAI's RAG engine doesn't just pull random data. It intelligently identifies and ranks the top five (or more, depending on configuration) data blocks that are most relevant to answering your question. This ensures that the Artificial Intelligence focuses on the most pertinent information.
- Synthesizes Coherent, Trusted Answers: The Large Language Model then uses these highly-ranked blocks to synthesize a comprehensive and accurate response. The answer is not merely a copy-paste of the blocks but a generated summary that intelligently combines insights from the selected sources.
- The Transparency of "Showing Your Work": The display of these contributing blocks and their names (your citations) provides unparalleled transparency. This feature allows users, especially those in compliance-heavy or accuracy-critical roles, to instantly verify the source of any piece of information, thereby fostering confidence and trust in the AI's output.
4. Policy for Citation Usage and Caveats
While AirgapAI dramatically enhances the trustworthiness of AI, understanding its citation policy and limitations is important.
- Trust Through Traceability: The primary purpose of citations in AirgapAI is to provide trust through traceability. They confirm which specific internal, Blockified data sources were used to generate an answer. This is fundamentally different from a public-internet search engine that might cite external websites.
- Human Review for Accuracy: The accuracy of AirgapAI's cited responses heavily relies on the quality and human-verified nature of your Blockified datasets. The human-in-the-loop review process during Blockify creation (as discussed earlier) is critical. If the original blocks contain outdated or incorrect information, the citations will accurately point to those blocks, but the underlying data might still be flawed. Therefore, ongoing data governance is vital.
- Maintaining Security: Because all data processing and retrieval happen locally and air-gapped, citations within AirgapAI confirm the use of your own approved internal data. This maintains your data sovereignty and security, as no external sources are being accessed or referenced.
- Citations Confirm Internal Sources: When AirgapAI provides a citation, it is verifying that the information was retrieved from your curated corpus of Blockified data. This ensures that the answers align with your organization's official knowledge base and policies.
By understanding and utilizing these features, AirgapAI transforms your interaction with Artificial Intelligence from a leap of faith into a verifiable, trustworthy, and secure knowledge retrieval experience.
Advanced Workflows and Enhanced Trust with AirgapAI
Beyond standard Retrieval-Augmented Generation with citations, AirgapAI offers advanced features that further enhance the trustworthiness and utility of your Artificial Intelligence interactions.
Entourage Mode (Multi-Persona Chat)
AirgapAI's unique Entourage Mode allows you to interact with multiple Artificial Intelligence personas simultaneously, each drawing from potentially different, specialized datasets. This feature provides diverse perspectives and facilitates complex decision-making, similar to consulting a panel of experts.
- How it Works: In Entourage Mode, you configure several Artificial Intelligence personas (e.g., Marketing Specialist, Legal Advisor, Engineering Lead). Each persona can be linked to a specific set of Blockified datasets and trained with a particular style or expertise. When you ask a question, all activated personas generate responses from their respective knowledge bases.
- Enhancing Trust through Cross-Referencing: This multi-persona approach inherently enhances trust. By receiving distinct answers from different "experts" on the same topic, you can cross-reference information, identify nuances, and gain a more holistic understanding. It's like having internal "expert witnesses" provide their cited inputs.
- Example for High-Stakes Scenarios:
  - Defense/Intelligence: Configure one persona as a "CIA Analyst" with expertise in intelligence gathering and target package details. Configure another as a "Military Tactician" tuned for ground operations and combat strategies. You can then ask a complex question (e.g., "Assess the strategic implications of scenario X"), and receive distinct, cited answers from each persona, offering a multi-perspective view critical for high-stakes decision-making and scenario planning.
  - Business: When preparing a complex proposal, your "Marketing" persona might focus on messaging, "Legal" on compliance, and "Technical Support" on feasibility, all leveraging their respective internal, Blockified datasets to provide cited insights.
Entourage Mode, combined with the underlying Blockify technology and citation capabilities, allows you to model complex decision-making processes, ensuring that even multi-faceted problems are approached with a foundation of trusted, traceable information from various expert viewpoints within your organization.
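As a rough mental model of Entourage Mode, the sketch below maps several personas to their own datasets and collects one cited answer per persona. The persona structure and the ask_all() helper are hypothetical illustrations; in practice, personas are configured through the AirgapAI interface rather than code.

```python
# Illustrative multi-persona sketch: each persona answers from its own dataset.
personas = {
    "Marketing Specialist": {"dataset": "marketing_blocks.jsonl", "style": "persuasive"},
    "Legal Advisor": {"dataset": "legal_blocks.jsonl", "style": "precise"},
    "Engineering Lead": {"dataset": "engineering_blocks.jsonl", "style": "technical"},
}

def ask_all(question: str, personas: dict) -> dict:
    """Collect one (answer, citations) pair per persona from its own dataset."""
    responses = {}
    for name, config in personas.items():
        # Placeholder: each persona would run retrieval against config["dataset"]
        # and generate its own cited answer, as in the earlier RAG sketch.
        responses[name] = {
            "answer": f"[{name} answer drawn from {config['dataset']}]",
            "citations": [config["dataset"]],
        }
    return responses

for persona, response in ask_all("Is this proposal feasible?", personas).items():
    print(persona, "->", response["answer"], "| sources:", response["citations"])
```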
Deployment, Updates, and Support for Long-Term Trust
Establishing trust in your Artificial Intelligence solution isn't just about initial setup; it's also about ensuring consistent performance, ongoing security, and reliable support. Iternal Technologies has designed AirgapAI with enterprise-grade deployment, update, and support protocols.
1. Installation Protocol
- Seamless Integration: AirgapAI is delivered as a straightforward executable file, designed to integrate seamlessly into your organization's standard Windows imaging process. This means your Information Technology department can easily include AirgapAI in their existing deployment workflows, just like any other enterprise application.
- Detailed Guidance: Our comprehensive deployment manual provides detailed, step-by-step instructions on imaging, provisioning, and configuring role-specific access for your users.
- Coordinated Seed Deployments: For initial pilot or seed deployments, the Iternal team coordinates closely with your organization to ensure the application and all intended datasets (pre-packaged via Blockify) are pre-loaded and configured for immediate use.
2. Ongoing Updates and Maintenance
- Synchronized Update Cadence: AirgapAI's update cycle is designed to synchronize with your organization's typical operating system or enterprise software update schedule. This minimizes disruption and streamlines Information Technology management. 
- Centralized Deployment: Whether pushing new data sets, application updates, or critical security patches, your Information Technology team can deploy new versions and content through familiar image management solutions (e.g., Microsoft Intune or similar applications). This ensures all devices are consistently up-to-date and secure.
  - Note: You can change the file server update location. This is configured in the `updaterConfig.json` file located at `C:\Users\John\AppData\Local\Programs\AirgapAI Chat\resources\auto-updater\updaterConfig.json`. This file typically points to a cloud or internal server where updates are hosted; a brief, illustrative sketch of inspecting it follows this list.
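For administrators, here is a hedged sketch of how one might inspect and repoint that update location programmatically. The key name updateServerUrl is a hypothetical placeholder; always check the actual contents of updaterConfig.json on your system before editing it.

```python
import json
from pathlib import Path

# Load the updater configuration, print it, and point it at an internal server.
config_path = Path(r"C:\Users\John\AppData\Local\Programs\AirgapAI Chat"
                   r"\resources\auto-updater\updaterConfig.json")

config = json.loads(config_path.read_text(encoding="utf-8"))
print("Current configuration:", config)

config["updateServerUrl"] = "https://updates.example.internal/airgapai"  # hypothetical key
config_path.write_text(json.dumps(config, indent=2), encoding="utf-8")
```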
3. Training and Support
- Comprehensive Training Options: We offer a 30-minute introductory demonstration to get you started, followed by personalized training sessions as an add-on service. Our online enablement page provides a wealth of resources, including step-by-step video tutorials, frequently asked questions, detailed user guides, and troubleshooting tips.
- Dedicated Customer Success: Our dedicated customer success team is readily available for follow-up calls, additional workshops, and ongoing assistance after the initial deployment, ensuring you maximize the value of AirgapAI.
This robust framework ensures that AirgapAI remains a trusted, high-performing, and easily manageable Artificial Intelligence solution throughout its lifecycle within your organization.
Conclusion: The Future of Trusted, Secure, and Cited Artificial Intelligence
In an era where Artificial Intelligence is rapidly transforming how we work, the ability to generate answers that are not only intelligent but also verifiable and trustworthy is paramount. AirgapAI, powered by Iternal Technologies, delivers this critical capability by combining cutting-edge Large Language Model technology with our patented Blockify data ingestion system and a commitment to 100% local, air-gapped security.
By guiding you through the process of setting up Retrieval-Augmented Generation with comprehensive citations, we've shown how AirgapAI empowers you to become the trusted expert. You can now confidently provide answers that come with "receipts," improving accuracy by up to 7,800% (78 times), dramatically reducing AI hallucinations, and ensuring every piece of information is traceable back to your organization's vetted knowledge base. This level of transparency is indispensable for analysts, support teams, legal professionals, and any role where accuracy and credibility are non-negotiable.
Embrace the future of secure, cost-effective, and transparent Artificial Intelligence. Gain the confidence that comes from knowing your AI works for you, on your terms, with your data, and always with the full backing of traceable sources.
Download the free trial of AirgapAI today at: https://iternal.ai/airgapai