How to Build a Zero-Trust Knowledge Base with Blockify and AirgapAI

Become the security-first knowledge leader: least privilege, maximum clarity. This comprehensive guide is for security, information technology, and knowledge management professionals aiming for controlled knowledge sharing. We will explore classification, tagging, role-based access, and immutable audit logs within AirgapAI, positioning it as an on-device, zero-trust-aligned Retrieval-Augmented Generation solution. This article includes a sample policy and a mapping guide.

The Era of Zero-Trust Artificial Intelligence: Why It Matters Now More Than Ever

In today's fast-paced digital landscape, Artificial Intelligence (AI) is no longer a luxury but a necessity for business efficiency and innovation. However, the promise of AI often comes with significant concerns: data security, privacy, reliability, and cost. Traditional cloud-based AI solutions, while powerful, inherently introduce risks related to data sovereignty, potential breaches, and the unpredictable nature of external networks. This is where the concept of "zero trust" becomes paramount for Artificial Intelligence, especially for organizations handling sensitive or proprietary information.

AirgapAI, developed by Iternal Technologies, stands as a revolutionary answer to these challenges. It’s a 100% local, on-device AI solution designed to provide unparalleled security, exceptional accuracy, and significant cost savings, all while operating completely offline if needed.

The Cloud Conundrum: Understanding the Risks of External Artificial Intelligence

When you leverage cloud-based AI services, your valuable, often sensitive, data must travel to and reside on external servers. This creates several vulnerabilities:

  • Data Sovereignty Concerns: Where is your data physically located? Does it cross international borders? These questions become critical for compliance with regulations like the General Data Protection Regulation (GDPR) or the Health Insurance Portability and Accountability Act (HIPAA).
  • External Network Breaches: Every time your data leaves your internal network, it becomes susceptible to interception or compromise. Cloud providers offer robust security, but the attack surface expands beyond your direct control.
  • Privacy Pitfalls: Many free or consumer-grade AI tools use your input data to train their models, effectively incorporating your proprietary information into their general knowledge base. This is a non-starter for any organization with confidential data.
  • Hidden Costs: Beyond per-user subscriptions, cloud AI solutions often incur additional charges for "tokens" or data usage, leading to unpredictable and escalating expenses.

The Hallucination Headache: When Artificial Intelligence Gets It Wrong

One of the most frustrating aspects of working with Large Language Models (LLMs) is the phenomenon of "hallucinations." This is when an AI generates plausible-sounding but factually incorrect or nonsensical information. For enterprise applications, where accuracy is paramount, an AI hallucination can erode trust, lead to misinformed decisions, and require extensive human verification, negating any efficiency gains.

On average, when an organization attempts to integrate its own data with a general-purpose AI, the hallucination rate can be as high as one in five queries – a staggering 20% error rate. If you cannot trust your Artificial Intelligence even once, you can never fully trust it again. This is a major stumbling block for widespread AI adoption in the enterprise.

The Cost Crisis: Breaking Free from Endless Subscriptions

Cloud-based AI solutions often come with a hefty price tag, typically ranging from $20 to $30 per user per month. Over a few years, this can amount to thousands of dollars per employee. AirgapAI offers a fundamental shift: a one-time perpetual license per device. This means you own your Artificial Intelligence, eliminating recurring subscription fees, hidden token charges, and unpredictable overage bills. AirgapAI runs at roughly one-tenth to one-fifteenth the cost of alternatives like Microsoft CoPilot or ChatGPT Enterprise, offering a massive return on investment from day one.

Introducing Zero-Trust Principles for Artificial Intelligence

The "zero trust" security model operates on the principle of "never trust, always verify." Applied to Artificial Intelligence, this means:

  • Explicit Verification: Every request for data, every access attempt, and every AI-generated output is explicitly verified before access is granted or information is presented.
  • Least Privilege Access: Users and AI models are granted only the minimum access privileges necessary to perform their assigned tasks.
  • Micro-segmentation: The knowledge base is segmented into granular units, with access controls applied at the most specific level possible.
  • Continuous Monitoring: All interactions with the AI and the data are continuously monitored and logged for anomalous behavior.

AirgapAI, through its local operation and integrated Blockify technology, is engineered from the ground up to embody these zero-trust principles, empowering organizations to build a truly secure and reliable knowledge base.

AirgapAI: Your On-Device, Secure Artificial Intelligence Assistant

AirgapAI is not just another chat application; it's a fully isolated, 100% local Artificial Intelligence solution designed to operate without any external network dependencies. Imagine the power of a sophisticated chat Large Language Model right on your desktop or laptop, processing your confidential data without it ever leaving your device.

What is AirgapAI?

AirgapAI is a locally installed, ChatGPT-like application that leverages open-source Large Language Models (LLMs). It runs entirely on the client device, such as a Dell AI Personal Computer, eliminating external network dependencies. This means your data remains sovereign, your privacy is protected, and your AI functions perfectly even in disconnected environments—be it a secure government facility, an airplane, or a remote field operation.

Key Differentiators:

  • Security: 100% local operation ensures data never leaves your device, providing an unparalleled level of privacy and protection against external breaches. This makes it a secure AI with offline mode and a privacy-first AI assistant.
  • Cost-effectiveness: With a one-time perpetual license and no hidden fees, AirgapAI is dramatically more affordable than cloud alternatives, making it a no subscription AI app and non-cloud AI for Personal Computer.
  • Accuracy: Powered by the patented Blockify technology, AirgapAI delivers up to 7,800% (78 times) more accurate responses, virtually eliminating hallucinations when interacting with your specialized data.
  • Offline Capabilities: Work anywhere, anytime, without an internet connection. AirgapAI is an offline AI alternative and an AI that works without internet.

System Requirements and Prerequisites

To ensure an optimal experience with AirgapAI, your device should meet or exceed the following specifications:

  • Central Processing Unit (CPU): minimum 8 Cores; recommended 8 Cores / 16 Threads or better
  • Random Access Memory (RAM): minimum 16 Gigabytes; recommended 32 Gigabytes or more
  • Disk: minimum 10 Gigabytes free (Solid State Drive); recommended 50 Gigabytes (Non-Volatile Memory Express)
  • Graphics Processing Unit (GPU): minimum 4 Gigabytes+ Video Random Access Memory (2024 or newer, integrated or dedicated); recommended 8 Gigabytes+ Video Random Access Memory (dedicated)
  • Operating System (OS): minimum Windows 11; recommended Windows 11 with the latest patches

Permissions: You will need security permissions to install applications on your device.

Installation: Getting AirgapAI Up and Running (Step-by-Step)

Setting up AirgapAI is designed to be straightforward, akin to installing any standard desktop application. No advanced technical skills or command-line interfaces are required.

1. Downloading the Installer Package

  • Obtain the latest ZIP archive from the internal or cloud link provided by your Information Technology (IT) department.
  • Save the file to your "Downloads" folder or any other writable location on your Personal Computer.
    Example file: AirgapAI-v1.0.2-Install.zip
    

2. Installing the Application

  • Locate the downloaded ZIP file. Right-click on it and select "Extract All..."
  • Choose a destination for the extracted files (the default is usually a new folder within your "Downloads" directory) and click "Extract."
  • Open the newly extracted folder.
  • Double-click on the installer executable, typically named AirgapAI Chat Setup.exe.
  • Follow the on-screen installer wizard:
    • Accept the license agreement.
    • Choose to create a desktop shortcut for easy access.
    • Click "Install."
    • Click "Finish" once the installation is complete.
  • If your Operating System's security features (such as SmartScreen or Gatekeeper) prompt you, select "Allow" or "Run anyway" to proceed with the installation.
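
For Information Technology teams rolling AirgapAI out at scale, the extract-and-run steps above can be scripted. Here is a minimal sketch using Python's standard library; the archive and installer names are the examples from this guide, and any silent-install switch depends on your specific installer build:

```python
import subprocess
import zipfile
from pathlib import Path

def extract_installer(archive: Path, dest: Path) -> Path:
    """Extract the downloaded ZIP and return the expected setup executable path."""
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)
    # Installer name from this guide; adjust to match your package version.
    return dest / "AirgapAI Chat Setup.exe"

def run_installer(setup_exe: Path) -> None:
    """Launch the installer interactively; add your installer's silent
    switch here if your build provides one."""
    subprocess.run([str(setup_exe)], check=True)
```

In practice you would call `extract_installer(Path("AirgapAI-v1.0.2-Install.zip"), Path("C:/Temp/AirgapAI"))` and then hand the returned path to `run_installer`.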

3. First-Launch Onboarding Wizard

Upon launching AirgapAI Chat for the very first time (via your new desktop shortcut or Start-menu entry), the application will perform a quick check for existing Large Language Models. If none are found, the guided Onboarding flow will automatically begin.

3.1. Profile and Chat Style
  • Click "Start Onboarding."
  • Enter a display name. The default is "You," but you can personalize it.
  • Select your preferred "Chat Style." Options typically include "Iternal Professional," "Casual," "Dark Mode," or "Retro," allowing for customizable AI personalities.
  • Click "Next."
3.2. Uploading the Core Large Language Model
  • On the "Models" screen, expand the "Available Models" drop-down menu. It will initially be empty.

  • Click "Upload Model."

  • Browse to the /models/ folder located within the extracted installer folder.

  • Choose a Large Language Model suited to your hardware capabilities:

    • Llama-1B (ideal for 2024 integrated Graphics Processing Units or low-power devices).
    • Llama-3B (recommended for integrated Graphics Processing Units from 2025 or dedicated Graphics Processing Units).
  • Click "Save." The upload typically takes approximately 30 seconds.

    Note: Information Technology Administrators configuring the system can also add or update Chat Large Language Models by accessing the folder created after model upload, which is typically located within the %appdata% directory, for example: C:\Users\YourUsername\AppData\Roaming\IternalModelRepo
    
3.3. Uploading an Embeddings Model
  • Still on the onboarding page, click "Upload Embeddings Model."

  • Open the /models/ folder again and select Jina-Embeddings.zip.

  • Click "Save." This upload also takes approximately 30 seconds.

    Note: Information Technology Administrators configuring the system can also add or update Embeddings Models by accessing the folder created after model upload, which is typically located within the %appdata% directory, for example: C:\Users\YourUsername\AppData\Roaming\IternalModelRepo
    
3.4. Adding Sample or Custom Datasets

Datasets are crucial for enabling Retrieval-Augmented Generation (RAG) capabilities, allowing your Artificial Intelligence to answer questions based on your specific information.

  • Click "Upload Dataset."

  • Navigate to the /datasets/ folder from the install folder.

  • Select CIA_World_Factbook_US.jsonl as a sample.

  • Click "Save."

    Note: Information Technology Administrators updating the datasets loaded on the system can push new updates to the dataset by modifying the contents of the files saved within the %appdata% directory, for example: C:\Users\YourUsername\AppData\Roaming\airgap-ai-chat\CorpusRepo
    

    Tip: While you can directly upload Word, PDF, or TXT files for immediate summarization, for larger collections of documents or to achieve optimal accuracy, it is highly recommended to convert your corpus into Blockify format. Local, on-device Blockify capabilities will be available starting in Quarter 3 of 2025.

3.5. Finish Onboarding

Verify that all three items (Core Large Language Model, Embeddings Model, and at least one Dataset) are successfully added, then click "Continue." AirgapAI Chat will now boot with your chosen selections, ready for use.

4. Optional Setup Steps for Information Technology Teams

For Information Technology teams desiring enhanced functionality and integration, AirgapAI Chat offers optional setup steps.

Dell Technologies Dell Pro Artificial Intelligence Studio Support

AirgapAI Chat natively supports integration with Dell Technologies’ Dell Pro Artificial Intelligence Studio (DPAIS).

  1. As the Information Technology Systems Administrator, install the required files to enable a Large Language Model via DPAIS. Both Intel and Qualcomm are supported.
  2. After DPAIS services are running and you have validated that the local Large Language Model Application Programming Interface (API) endpoints can be called, open PowerShell and input the following command:
    [System.Environment]::SetEnvironmentVariable("DPAIS_ENDPOINT", "http://localhost:8553/v1/openai", "User")
    
  3. Relaunch the AirgapAI Chat application. The DPAIS Large Language Models available will automatically appear in the model selection menu within the settings page.
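
Tools that need to talk to the same endpoint can resolve it the way the application does: read the `DPAIS_ENDPOINT` environment variable and fall back to the default from the PowerShell command above. A sketch, assuming the service follows the standard OpenAI-compatible route layout (the exact routes your DPAIS build exposes may differ):

```python
import os

# Default endpoint from the PowerShell command in step 2; the environment
# variable, when set, overrides it.
DEFAULT_DPAIS = "http://localhost:8553/v1/openai"

def dpais_chat_url() -> str:
    """Resolve an OpenAI-style chat-completions URL for the DPAIS endpoint."""
    base = os.environ.get("DPAIS_ENDPOINT", DEFAULT_DPAIS)
    return base.rstrip("/") + "/chat/completions"
```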

Initial Model Benchmarking

When a Large Language Model is launched for the first time, AirgapAI Chat will offer to Benchmark your hardware.

  • Click "Run Benchmark" (highly recommended).
  • This process takes approximately 2 minutes and measures crucial metrics like "tokens per second" and "inference speed." This helps you understand the performance of your local AI assistant.
  • You can choose to skip the benchmark, but the context-size limit for your conversations will then remain at a conservative 2,000 tokens. After benchmarking, you can adjust the token context window under "Settings" > "Chat" by moving the slider to your desired size, up to 32,000 tokens for optimal performance with larger documents.

Blockify: The Foundation of Your Zero-Trust Knowledge Base

The true power of AirgapAI, particularly for building a secure and highly accurate knowledge base, lies in its patented data management solution: Blockify. This technology is the bridge between your raw, often messy, enterprise data and the precise, trustworthy responses you expect from an Artificial Intelligence. Blockify ensures that your bring your own data AI operates with maximum effectiveness and security.

What is Blockify?

Blockify is the ultimate data management solution for Large Language Models (LLMs) at scale. It distills your content into question-and-answer pairs organized as a single source of truth, a structure optimized so Large Language Models can answer your questions with precision. It fundamentally addresses the challenge of "messy enterprise data"—the vast repositories of documents, reports, presentations, and communications that are often outdated, redundant, or inconsistent.

Blockify transforms this chaotic data into a structured, highly accurate corpus of information, making your AI not just smart, but trustworthy. It is key to reducing AI hallucinations, ensuring that your AI for confidential chats provides reliable information.

The Blockify Process: From Documents to Trusted Blocks (Extreme Detail)

Think of Blockify as a highly intelligent content refinery, meticulously preparing your data for optimal Artificial Intelligence interaction. The process is designed to maximize accuracy while embedding zero-trust security from the ground up.

1. Ingestion of Large Data Sets

Blockify begins by ingesting vast quantities of your organizational data. This includes a wide array of formats:

  • Text documents (.txt)
  • Hypertext Markup Language (.html) files
  • Portable Document Format (.pdf) files
  • Microsoft Word documents (.docx)
  • Microsoft PowerPoint presentations (.pptx)
  • Image files (where text can be extracted or metadata analyzed).
  • For video content, Blockify can extract still frames for analysis or transcribe audio to text as needed.

Examples of ingested data could include thousands of sales documents, detailed Request For Proposal (RFP) responses, internal policy manuals, legal contracts, or customer support knowledge bases.

2. Condensation into Concise, Modular Blocks

Once ingested, Blockify doesn't just store the data; it intelligently condenses and distills it. It identifies the core ideas and critical information within the documents, breaking them down into concise, modular "blocks" of data. This process can reduce the original data size by as much as 97.5% (down to 2.5% of the original content), making it incredibly efficient for Large Language Model processing.
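
To put that reduction in concrete terms, here is the arithmetic with illustrative numbers (the corpus size is hypothetical; the 2.5% retention figure is the one quoted above):

```python
original_mb = 1_000.0             # e.g., a 1 Gigabyte document corpus
reduced_mb = original_mb * 0.025  # Blockify retains roughly 2.5% of the volume
print(f"{original_mb:.0f} MB -> {reduced_mb:.0f} MB")
```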

Each Blockify block is structured to deliver maximum clarity and precision for Artificial Intelligence interaction:

  • A Name (Displayed in Blue): This provides a quick, human-readable identifier for the content topic of the block. For example, "Q3 Sales Performance" or "Employee Benefits Policy."
  • A Critical Question (Bold, Italicized): This is the key query that a user might ask related to the block's content. It represents the most direct way to extract the block's value. For example, "What are the key performance indicators for Q3 sales?"
  • A Trusted Answer (Light Gray): This is the distilled, accurate, and approved response to the Critical Question. This answer is carefully crafted to avoid redundancy, ambiguity, or outdated information, thus directly addressing the pitfalls of Artificial Intelligence hallucinations.
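
The three-part structure above can be modeled as a simple record. The following is a conceptual sketch only; the field names are illustrative, not Blockify's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class IdeaBlock:
    """One Blockify-style unit of knowledge."""
    name: str               # short, human-readable topic identifier
    critical_question: str  # the key query this block answers
    trusted_answer: str     # the distilled, approved response
    metadata: dict = field(default_factory=dict)  # e.g., classification, permissions

block = IdeaBlock(
    name="Q3 Sales Performance",
    critical_question="What are the key performance indicators for Q3 sales?",
    trusted_answer="<approved answer text>",
)
```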

3. Rich Metadata Tagging for Zero-Trust Environments

A crucial element of Blockify's zero-trust alignment is its robust metadata tagging system. Each block is not just content; it's an intelligent, self-describing unit of information. The metadata includes:

  • Classification: Categorizing data based on its sensitivity (e.g., Public, Internal Use Only, Confidential, Classified).
  • Permissions: Defining which user roles or departments are authorized to access or interact with this specific block of data (e.g., "Finance Team," "Legal Department," "All Employees"). This supports role-based workflows.
  • Classification Levels: Aligning with organizational security hierarchies (e.g., Top Secret, Secret, Unclassified for government or defense applications).

This rich metadata enables the enforcement of zero-trust policies at the most granular level, ensuring that only authorized Artificial Intelligence models and user profiles can access specific pieces of information. This creates a truly secure AI for personal data and privacy protection.
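
In practice, a retrieval layer can filter blocks on these permission tags before the model ever sees them. A hedged sketch using plain dictionaries (the tag and role names are illustrative, not AirgapAI's internal representation):

```python
def authorized_blocks(blocks, user_roles):
    """Return only blocks whose 'permissions' tag intersects the user's roles.

    Blocks with no explicit permissions are withheld (default deny).
    """
    roles = set(user_roles)
    return [b for b in blocks if roles & set(b.get("permissions", []))]

corpus = [
    {"name": "Benefits Policy", "permissions": ["All Employees"]},
    {"name": "Budget Report", "permissions": ["Finance Team"]},
    {"name": "Untagged Draft"},  # no tag, so it is never returned
]
```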

4. Human-in-the-Loop Review for Continuous Accuracy

After ingestion and initial block creation, these blocks are routed for a quick human review. This "human-in-the-loop" step is vital for:

  • Updating Messaging: Ensuring that answers reflect the latest policies, product specifications, or strategic directions.
  • Approving Content: Validating the accuracy and appropriateness of the Trusted Answer.
  • Flagging Outdated Content: Identifying and removing or updating information that is no longer relevant (e.g., a "2019 Marketing Strategy" document that is no longer applicable).

This continuous review process is essential for maintaining the integrity and trustworthiness of your zero-trust knowledge base.

5. Outcome Metrics: Unprecedented Accuracy and Efficiency

The Blockify process delivers transformative results:

  • Data Size Reduction: As mentioned, original data can be reduced by up to 97.5%, making the knowledge base incredibly efficient to store and process on a local device.
  • Large Language Model Accuracy Improvement: Most remarkably, Blockify can improve the accuracy of your Large Language Models by an astonishing 7,800% (78 times). This effectively mitigates Artificial Intelligence hallucinations, allowing your teams to trust the Artificial Intelligence's responses with confidence.

Building a Zero-Trust Knowledge Base with Blockify

Leveraging Blockify within AirgapAI allows you to construct a knowledge base that is not only powerful but inherently secure and trustworthy.

1. Data Curation and Classification

For best results, we recommend that customer data be curated into relevant, logical categories. This could be specific product lines, business units, functional areas (e.g., Human Resources, Sales, Legal), or project-specific repositories. This initial organization greatly enhances the effectiveness of Blockify's hierarchical metadata and taxonomy framework.

2. Access Control and Permissions

The metadata tagging within each Blockify block enables fine-grained access control. When a user queries AirgapAI, the system, tied to the user's profile and permissions, only accesses blocks for which that user (and thus the Artificial Intelligence persona acting on their behalf) has authorization. This strictly enforces the "least privilege access" principle of zero trust.

3. Immutable Audit Logs (Implicit)

While not explicitly a "feature" in the conversational interface, the underlying Blockify process inherently supports auditability. Every change, approval, and version of a block is managed. This provides a clear, immutable record of how information has evolved and who has approved it, which is critical for compliance and maintaining data integrity in a zero-trust environment.
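
Auditability of this kind is commonly implemented as an append-only, hash-chained log, where each record's hash covers the previous record so silent edits to history are detectable. A conceptual sketch, not AirgapAI's internal format:

```python
import hashlib
import json

def append_entry(log, entry):
    """Append an entry whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev, "hash": digest})
    return log

def verify_chain(log):
    """Recompute every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for rec in log:
        payload = json.dumps(rec["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True
```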

4. Policy and Mapping Guide (Conceptual Example)

To effectively implement a zero-trust knowledge base, an organization should establish clear policies. Here’s a conceptual example:

Sample Zero-Trust Knowledge Base Policy (Excerpt)

Objective: To ensure all Artificial Intelligence-driven knowledge retrieval adheres to the principle of least privilege, guaranteeing data confidentiality, integrity, and availability within Iternal Technologies' AirgapAI solution.

Scope: All proprietary and sensitive data ingested, processed, and utilized within AirgapAI's Blockify knowledge base.

Policy Principles:

  1. Default Deny: All access to Blockify-generated information is denied by default. Explicit authorization is required for any interaction.
  2. Granular Segmentation: Data will be categorized and tagged into Blockify blocks with specific metadata (classification, permissions, sensitivity levels).
  3. Role-Based Access Control: User roles will be defined with minimum necessary privileges, mapping directly to specific Blockify metadata tags.
  4. Human Verification: A human-in-the-loop process will be mandatory for initial Blockify ingestion review and periodic content validation.
  5. Auditability: All data modifications, access attempts (successful or denied), and Artificial Intelligence outputs will be logged for security review.

Data Classification and Access Mapping Guide (Example)

Each data classification maps to a Blockify metadata tag, the AirgapAI profile roles authorized to access it, and an example use case:

  • Public: metadata tag Public-Knowledge; authorized roles: All Employees; example use case: general company information, public product specifications
  • Internal Use Only: metadata tag Internal-Ops; authorized roles: All Employees (Internal); example use case: internal process guides, general Human Resources policies
  • Confidential: metadata tag Confidential-Finance; authorized roles: Finance Team, Executive Leadership; example use case: budget reports, investment strategies
  • Classified: metadata tag Classified-R&D; authorized roles: Research & Development Lead, Project Managers; example use case: proprietary research findings, unreleased product designs
  • Restricted: metadata tag Restricted-Legal; authorized roles: Legal Department, Executive Leadership; example use case: active legal cases, compliance audit reports
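
A mapping guide like this can be expressed directly as configuration that an access check consults. A minimal sketch, mirroring the example classifications above (the default-deny behavior is the point: unknown tags or unlisted roles get nothing):

```python
# Classification tag -> set of authorized roles, from the example mapping guide.
ACCESS_MAP = {
    "Public-Knowledge": {"All Employees"},
    "Internal-Ops": {"All Employees (Internal)"},
    "Confidential-Finance": {"Finance Team", "Executive Leadership"},
    "Classified-R&D": {"Research & Development Lead", "Project Managers"},
    "Restricted-Legal": {"Legal Department", "Executive Leadership"},
}

def is_authorized(tag: str, role: str) -> bool:
    """Default deny: unknown tags or unlisted roles are refused."""
    return role in ACCESS_MAP.get(tag, set())
```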

This framework ensures that even if an Artificial Intelligence model were to operate outside its intended parameters, the underlying data access controls would prevent it from retrieving unauthorized information. This makes AirgapAI an ideal locked down AI app for sensitive organizational data.

AirgapAI in Action: Everyday Workflows with Your Secure Knowledge Base

Now that we understand the robust foundation of AirgapAI and Blockify, let's explore how to leverage this powerful combination in your daily workflows to achieve secure, accurate, and efficient Artificial Intelligence interactions. The user interface of AirgapAI Chat is designed for simplicity, making complex AI tasks accessible to anyone.

AirgapAI Chat User Interface Tour

Upon launching AirgapAI Chat, you'll find a clean, intuitive interface designed for seamless interaction. Key elements include:

  • Chat Window: The central area where you'll type your prompts and receive Artificial Intelligence responses.
  • Workflow Bar: Located below the new chat window, this bar provides quick access to pre-configured workflows for common tasks.
  • Sidebar: On the left, you'll find options for managing chats, models, and toggling datasets on or off.

Retrieval-Augmented Question Answering with Blockify Datasets

This is where your zero-trust knowledge base truly shines, enabling your private LLM to provide precise answers based on your curated, trusted data.

  1. Toggle Your Dataset ON: In the sidebar, you will see a list of your available datasets. Click the toggle switch next to the dataset you wish to query (e.g., "Iternal Technologies Enterprise Portfolio Overview" or the sample "CIA World Factbook for USA" dataset). The system will indicate when the dataset is active.

  2. Ask Your Question: In the chat window, pose a question relevant to the selected dataset.

    • Example using the CIA World Factbook: "What are the major political parties in the United States?"
    • Example using your internal corporate data (assuming "Iternal Technologies Enterprise Portfolio Overview" dataset is selected): "What is Iternal Technologies?" or "What is AirgapAI?"
  3. Receive Trusted Answers with Citations: The Retrieval-Augmented Generation (RAG) engine will first fetch the most relevant IdeaBlocks from your Blockify-powered dataset. Then, the Large Language Model will synthesize a coherent, trusted answer based only on that verified information, showing clear citations back to the source blocks. This provides trusted answers for AI technology.

    • For instance, if you ask "What is PC as a service?" with a relevant dataset, the system will identify and rank the top five data blocks that best answer the question and then synthesize a coherent, trusted response.
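
Under the hood, ranking the "top five data blocks" is typically a vector-similarity search over block embeddings. Here is a toy sketch in plain Python; in the real application the vectors would come from the embeddings model loaded during onboarding, not from hand-written lists:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def top_k(query_vec, blocks, k=5):
    """Rank (vector, text) pairs by similarity to the query; return the texts."""
    ranked = sorted(blocks, key=lambda b: cosine(query_vec, b[0]), reverse=True)
    return [text for _, text in ranked[:k]]
```

The Large Language Model then synthesizes its answer from only the returned block texts, which is what keeps responses grounded in the curated dataset.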

File Upload and Summarization

For quick analysis of individual documents, AirgapAI provides an easy file upload feature.

  1. Upload Your File: Drag a document file (such as a Portable Document Format (.pdf), Microsoft Word document (.docx), or plain text (.txt) file) directly onto the chat window, or click the paperclip icon (📎) to browse and select your file.
  2. Prompt for Summarization: Once the file is uploaded, enter a prompt.
    • Example: "Summarize this document in bullet points."
  3. Receive Instant Insights: AirgapAI will embed the document's content and provide an instant summary, directly on your local device.

Guided Demo Workflows

AirgapAI includes "Quick Start" workflows tailored for different roles and common business tasks, streamlining the Artificial Intelligence interaction process.

  1. Access the Workflow Bar: Locate the Workflow Bar below the new chat window.
  2. Select a Workflow: Click on a predefined workflow (e.g., "Sales Proposal – Cover Letter" or "Create a marketing outline for a new Dell AI PC").
  3. Provide a Prompt: Enter a minimal or robust prompt, depending on the workflow.
    • For "Sales Proposal – Cover Letter": You might upload supporting documents and then simply prompt "Write a cover letter."
    • For "Create a marketing outline": The system will query its general-purpose Large Language Model data, which is trained on the public Internet.
  4. Receive Engineered Output: The system delivers a fully-engineered output. You can click "Copy" (📋) to place the text on your clipboard.

Entourage Mode (Multi-Persona Chat)

"Entourage Mode" is a unique feature that allows users to interact with multiple AI personas simultaneously, providing diverse perspectives for complex decision-making and scenario planning. This is a powerful feature for customizable AI personalities and role-based workflows.

  1. Select an Entourage Mode Quick Start Workflow: From the new chat page, choose an Entourage Mode workflow.
  2. Configure Personas: In "Advanced Settings" → "Personas," you can define and configure various AI personas (e.g., Marketing, Sales, Engineering).
    • Defense/Intelligence Scenario: Configure one persona as a Central Intelligence Agency (CIA) analyst (expert in intelligence gathering, target package details, sensitive data interpretation) and another as a military tactician (tuned for insights on ground operations, combat strategies, tactical decision-making).
  3. Ask a Question: Pose a complex question to your "entourage."
    • Recommended prompt: "I am launching a new product called AirgapAI. It is a 100% local chat Large Language Model solution that is one-tenth the cost of other solutions with more capabilities. What do you think? Please answer in short sentences."
  4. Receive Multi-Perspective Responses: Responses from each persona will appear in a queue, with a persona activity indicator showing which Artificial Intelligence is "typing." This multi-persona approach supports high-stakes decision-making and scenario planning by combining diverse expert viewpoints, giving you a comprehensive understanding of complex issues.

Role-Based Workflows and User Profiles

AirgapAI is designed for enterprise deployment and multi-user environments.

  • The application is tied to the user's profile upon login. This means that multiple users on the same device can each leverage the application with their own isolated experiences and datasets.
  • This is configured per user profile through your standard image and provisioning process, ensuring that each user only accesses authorized data—a core principle of zero trust. This is part of the robust AI for privacy protection.

Multilingual Conversations

AirgapAI's Large Language Models can seamlessly switch between languages.

  • Prompt Example: "Tell me a short story in German about renewable energy."
  • The Large Language Model will generate the story in German. You can use the "Stop" button to halt generation at any time.

Advanced Configuration and Management for Information Technology Teams

AirgapAI is designed not only for end-user accessibility but also for robust management and customization by Information Technology (IT) departments. This section details features that allow IT to optimize, secure, and tailor the AirgapAI experience across an organization.

Context-Window Expansion

After completing the initial model benchmarking, IT administrators can adjust the maximum token limit for Large Language Model conversations.

  • Navigate to "Settings" → "Model Settings."
  • Drag the slider to set "Max Tokens" up to 32,000, allowing the Artificial Intelligence to process and maintain context for much longer and more complex documents or conversations.

Styling and Themes

AirgapAI offers flexibility in its visual presentation to align with corporate branding or user preference.

  • Go to "Settings" → "Appearance."
  • Switch between predefined themes (e.g., "Iternal Professional," "Dark Mode") or, for advanced users, build custom Cascading Style Sheets (CSS) to create entirely unique themes.

Workflow Templates

This feature is ideal for IT teams to pre-load company-specific tasks and standard prompts, ensuring consistent and efficient Artificial Intelligence usage.

  • Access "Settings" → "Workflows."
  • Add or edit prompt chains, creating guided sequences for common tasks like "Quarterly Report Summary" or "Legal Contract Review." These templates can enforce best practices for prompt engineering and data interaction.
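
Conceptually, a prompt chain is an ordered list of templates in which each model output feeds the next step. A minimal sketch, with illustrative step texts and a caller-supplied `generate` function standing in for the local model call:

```python
def run_chain(steps, generate):
    """Run a list of prompt templates in order, feeding each model output
    into the next template's {previous} slot ({previous} is empty on step 1)."""
    previous = ""
    for template in steps:
        prompt = template.format(previous=previous)
        previous = generate(prompt)
    return previous

# Illustrative chain for a "Quarterly Report Summary" workflow.
quarterly_report_chain = [
    "Summarize the attached quarterly figures in five bullets.{previous}",
    "Draft an executive overview from this summary:\n{previous}",
]
```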

In-App Benchmarking Suite

For IT teams testing new Large Language Models or hardware configurations, the built-in benchmarking suite is invaluable.

  • Navigate to "Settings" → "Benchmarking."
  • Run tests to measure a new Large Language Model's throughput in tokens per second and its inference latency, aiding in optimal model selection and deployment.
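To make the headline metric concrete, the sketch below shows how a tokens-per-second figure is computed: total tokens generated divided by wall-clock time. The `generate` callable is a placeholder standing in for any local model call, not an AirgapAI API; the built-in suite handles this for you.

```python
import time


def tokens_per_second(generate, prompt: str) -> float:
    """Measure raw throughput: tokens generated divided by wall-clock time.

    `generate` is a placeholder for any local model call that returns a
    list of tokens; it is illustrative, not an AirgapAI function.
    """
    start = time.perf_counter()
    tokens = generate(prompt)
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed
```

Running the same measurement across candidate models on the same hardware gives a like-for-like basis for choosing which model to deploy.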

Model and Dataset Management

AirgapAI empowers IT with significant control over the deployed Artificial Intelligence models and datasets, supporting a local model support environment.

  • Bring Your Own Model (BYOM): Users can bring their own Large Language Models, or choose from a suite of pre-quantized, open-source models (e.g., Llama, Mistral, DeepSeek). If a needed model isn't pre-quantized, Iternal's engineering team can package and deploy it as a service. This ensures Large Language Model flexibility.
  • Updating Large Language Models: Information Technology administrators can update Chat Large Language Models by modifying the contents of the folder created after model upload, typically within the %appdata% directory (e.g., C:\Users\John\AppData\Roaming\IternalModelRepo).
  • Updating Datasets: As new documents are Blockified and datasets are updated, Information Technology can push these updated datasets to local devices. This is achieved through standard image management applications like Microsoft Intune or similar provisioning tools, ensuring a centralized, secure update mechanism. The contents of the dataset files can be modified within %appdata% (e.g., C:\Users\John\AppData\Roaming\airgap-ai-chat\CorpusRepo).
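A dataset push of the kind described above can be sketched as a small provisioning script of the sort IT might deploy through Microsoft Intune. This is a minimal illustration, assuming the CorpusRepo layout described above; the `*.json` dataset extension and function name are assumptions, not part of the product:

```python
import shutil
from pathlib import Path


def push_dataset(source_dir: Path, appdata_roaming: Path) -> list:
    """Copy updated Blockify dataset files into a user's CorpusRepo.

    Mirrors what an Intune-deployed provisioning script would do; the
    *.json dataset extension is assumed for illustration.
    """
    corpus_repo = appdata_roaming / "airgap-ai-chat" / "CorpusRepo"
    corpus_repo.mkdir(parents=True, exist_ok=True)
    copied = []
    for dataset_file in sorted(source_dir.glob("*.json")):
        destination = corpus_repo / dataset_file.name
        shutil.copy2(dataset_file, destination)  # preserves timestamps
        copied.append(destination)
    return copied
```

Because the copy targets each user's own roaming profile, the same script run per user preserves the role-specific data isolation described earlier.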

Updates and Maintenance

Maintaining a secure and up-to-date Artificial Intelligence environment is critical.

  • Update Cadence: Our update cadence is synchronized with your typical Operating System or enterprise software update cycle.
  • Deployment: Whether pushing data, application updates, or security patches, Information Technology can deploy new versions through familiar image management solutions, ensuring a seamless and secure update process across the fleet.
  • Updates are delivered by the built-in "Update Manager," which can be configured to use a "Local Server" or "Cloud" in "Settings" → "Updates." Information Technology can change the file server update location by modifying updaterConfig.json at C:\Users\John\AppData\Local\Programs\AirgapAI Chat\resources\auto-updater\updaterConfig.json. An example updaterConfig.json file might look like:
    {
      "win32-x64-prod": {
        "readme": "",
        "update": "https://d30h3ho4go3k4y.cloudfront.net/releases/prod/public/chat-assistant/prod/public/1.0.2/AirgapAI Chat Setup 1.0.2.exe",
        "install": "https://d30h3ho4go3k4y.cloudfront.net/releases/prod/public/chat-assistant/prod/public/1.0.2/AirgapAI Chat Setup 1.0.2.exe",
        "version": "1.0.2"
      }
    }
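For air-gapped fleets, IT typically retargets the update URLs from the public CDN to an internal file server. The sketch below shows one way to do that edit programmatically; the internal server URL and function name are hypothetical examples, not product defaults:

```python
import json
from pathlib import Path


def retarget_updater(config_path: Path, local_server: str) -> dict:
    """Point every platform entry in updaterConfig.json at an internal
    file server instead of the public CDN, keeping the original
    installer file name and version. `local_server` is a hypothetical
    internal URL supplied by IT.
    """
    config = json.loads(config_path.read_text(encoding="utf-8"))
    for entry in config.values():
        for key in ("update", "install"):
            installer_name = entry[key].rsplit("/", 1)[-1]
            entry[key] = f"{local_server}/{entry['version']}/{installer_name}"
    config_path.write_text(json.dumps(config, indent=2), encoding="utf-8")
    return config
```

Keeping the file name and version intact means the Update Manager sees the same release metadata; only the download origin changes.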
    

Troubleshooting, Support, and Frequently Asked Questions

Even with the most intuitive software, questions and specific scenarios can arise. Iternal Technologies provides comprehensive support and resources to ensure a smooth experience with AirgapAI.

Common Questions

Here are some common questions and our suggested answers:

  • "Has AirgapAI been granted an Authority To Operate (ATO)?"

    • Answer: "We are actively working with U.S. Air Force specialists who are evaluating AirgapAI through their Authority To Operate process."
  • "What file formats does Blockify support for data ingestion?"

    • Answer: "Our system natively ingests text, Hypertext Markup Language, Portable Document Format, Microsoft Word, Microsoft PowerPoint, and graphic files. For video content, we extract still frames or transcribe audio as needed. For best results, we recommend customer data be curated into relevant categories (such as specific product lines or business units) to take full advantage of our hierarchical metadata and taxonomy framework. As new documents are Blockified, the datasets can be updated, and these updated datasets can be pushed to the local devices via Microsoft Intune or similar image management applications."
  • "How do we support multiple users on a single network or device?"

    • Answer: "AirgapAI runs directly on each client device, integrated into your standard image-provisioning process. For secure multi-user environments, Information Technology can configure the image so each user accesses personalized, role-specific datasets stored within their user folder, ensuring individual privacy and data isolation."
  • "Can customers bring their own Large Language Models or fine-tune open-source ones?"

    • Answer: "Absolutely. AirgapAI is designed with flexibility in mind. Customers can bring their own models or choose from a suite of pre-quantized, open-source models (e.g., Llama, Mistral, DeepSeek). If a needed model isn’t pre-quantized, our engineering team can package and deploy it as a service."

Installation Protocol

  • Deployment: "AirgapAI is delivered as an executable file that integrates straightforwardly into your standard Windows imaging process. Our deployment manual provides detailed instructions on imaging, provisioning, and role-specific configuration."
  • Seed Deployments: "For initial seed deployments or pilots, the process is coordinated with the Iternal Technologies team, ensuring the application and all intended datasets (pre-packaged via Blockify) are pre-loaded for a rapid start."

Ongoing Updates and Maintenance

  • Update Cadence: "Our update cadence is synchronized with your typical Operating System or enterprise software update cycle. Whether pushing data or security patches, Information Technology can deploy new versions through familiar image management solutions."
  • Training and Support: "We offer a 30-minute introductory demonstration followed by personalized training sessions as an add-on service. Our online enablement page includes step-by-step videos, Frequently Asked Questions, user guides, and troubleshooting tips. Our customer success team is also available for follow-up calls and additional workshops after initial deployment."

The Result of the Result: Becoming a Security-First Knowledge Leader

You're not just implementing an Artificial Intelligence application; you're transforming your organization into a security-first knowledge leader. With AirgapAI and Blockify, you move beyond merely using Artificial Intelligence to embodying the identity of an organization that is:

  • Undeniably Secure: Your data remains within your premises, protected by hardware-level and software-level zero-trust principles, giving you secure AI with end-to-end privacy. You are the company that can confidently declare: "Our private AI never leaves our network."
  • Impeccably Accurate: Artificial Intelligence hallucinations, once a major deterrent, are virtually eliminated, thanks to Blockify's 7,800% accuracy improvement. You are the organization that trusts its Artificial Intelligence because it's built on a foundation of verified, pristine data.
  • Exceptionally Efficient: Achieve fast Artificial Intelligence wins across multiple departments in days, not months, while drastically reducing costs. Your teams operate with increased productivity, empowered by an offline secure AI assistant that delivers immediate value.
  • In Total Control: You own your Artificial Intelligence. You set its terms, manage its data, and control its evolution, free from the constraints and unpredictable costs of cloud subscriptions. This is the ultimate one-device AI license, empowering you to build your own AI assistant.

AirgapAI provides a swift Artificial Intelligence win, robust cost savings, and unparalleled data security, virtually eliminating all Artificial Intelligence hallucinations—all of which are critical in today’s challenging market. Our patented Blockify technology improves Large Language Model accuracy by 78 times. You become the organization that embraces cutting-edge Artificial Intelligence without compromise, setting a new standard for secure and intelligent operations.

Conclusion and Next Steps

AirgapAI by Iternal Technologies offers a paradigm shift in how organizations can leverage Artificial Intelligence. It provides a secure, accurate, and cost-effective pathway to empower your workforce, safeguard your data, and unlock unprecedented productivity, all from your AI Personal Computer. Take control of your Artificial Intelligence journey and become a leader in the zero-trust knowledge economy.

Download the free trial of AirgapAI today at: https://iternal.ai/airgapai

Free Trial

Download for your PC

Experience our 100% Local and Secure AI-powered chat application on your Windows PC

✓ 100% Local and Secure ✓ Windows 10/11 Support ✓ Requires GPU or Intel Ultra CPU
Start AirgapAI Free Trial