How to Bring Your Own Model (BYOM) into AirgapAI: Unlock Custom Artificial Intelligence Without Compromising Privacy or Cost Control

Become the innovator who confidently says "yes" to the highly specialized models your team loves, custom-tailored for any task, without surrendering privacy or control and without incurring exorbitant cloud costs. With AirgapAI, you're not just deploying Artificial Intelligence; you're becoming the architect of an agile, secure, and infinitely adaptable intelligence within your organization.

This comprehensive guide is crafted for Artificial Intelligence leads, data scientists, and Information Technology professionals seeking unparalleled model flexibility within enterprise guardrails. We'll delve into the essential topic of integrating your own Large Language Models, covering everything from supported model formats and precise upload steps to critical repository locations, embeddings model alignment, and robust data governance rules. AirgapAI empowers you with the freedom to experiment locally, ensuring your unique models enhance productivity securely and cost-effectively.

1. Introduction: Unlocking the Power of Your Own Artificial Intelligence Models with AirgapAI

In today's fast-evolving technological landscape, Artificial Intelligence is no longer a luxury but a necessity for business agility and innovation. However, many organizations face a critical dilemma: how to leverage powerful Artificial Intelligence models without compromising data security, incurring prohibitive costs, or relying on generic cloud-based solutions that may not fit their unique needs. This is where AirgapAI and the concept of Bring Your Own Model (BYOM) fundamentally change the game.

What is "Bring Your Own Model" (BYOM)?

At its core, Bring Your Own Model refers to the ability to integrate and run custom or open-source Artificial Intelligence models within a specific application or platform. Instead of being limited to the models pre-selected by a vendor, BYOM empowers you to choose, fine-tune, or even create your own Large Language Models (LLMs) and seamlessly deploy them. This means you can tailor your Artificial Intelligence to understand your company's proprietary jargon, industry-specific data, and unique business challenges with unparalleled precision.

Why BYOM is Crucial for Modern Enterprises

For modern enterprises, BYOM isn't just about flexibility; it addresses several critical pain points:

  • Privacy and Data Sovereignty: Cloud-based Artificial Intelligence solutions often require your sensitive data to leave your secure perimeter, raising concerns about data leaks, compliance, and intellectual property. BYOM, especially with AirgapAI, keeps your data and models entirely local.
  • Customization and Specialization: Generic models, while powerful, lack the nuanced understanding required for specialized tasks. BYOM allows you to deploy models trained on your specific datasets, leading to highly accurate and relevant responses for niche applications.
  • Cost Control: Cloud Artificial Intelligence services often come with unpredictable subscription fees, token charges, and overage bills that can quickly escalate. BYOM with a local solution like AirgapAI offers a more transparent, predictable, and significantly lower cost model.

How AirgapAI Revolutionizes BYOM

AirgapAI, made by Iternal Technologies, is designed from the ground up to support a robust BYOM strategy. It is a 100% local chat-based Large Language Model solution that runs entirely on your client device, such as an Artificial Intelligence Personal Computer (AI PC). This eliminates any external network dependencies for inference, ensuring your data never leaves your device. By combining BYOM flexibility with AirgapAI's inherent security and cost efficiency, you gain a powerful, tailored, and private Artificial Intelligence assistant that truly works for you.

2. Understanding the AirgapAI Advantage: Why Local, Private Artificial Intelligence Matters

Before diving into the mechanics of Bring Your Own Model, it's essential to grasp the foundational benefits that AirgapAI provides. These advantages are what make AirgapAI an ideal platform for deploying your custom Large Language Models in a secure, efficient, and cost-effective manner.

Data Sovereignty and Security

The paramount concern for many organizations today is the security and control of their data. AirgapAI addresses this head-on:

  • 100% Local Operation: AirgapAI runs entirely on the client device. This means your data, your queries, and the Artificial Intelligence's responses never leave your physical machine. There is no "in" or "out" network connection for Artificial Intelligence inference, providing a true air-gapped solution.
  • No Cloud Dependency: Unlike cloud-based Large Language Models, AirgapAI does not send your information to external servers for processing. This virtually eliminates the risk of data breaches, unauthorized access, or compliance violations that come with third-party cloud solutions.
  • Ideal for Regulated Industries: For sectors like government, defense, financial services, and healthcare, where data privacy and compliance (e.g., Health Insurance Portability and Accountability Act - HIPAA) are non-negotiable, AirgapAI provides the robust security framework required. Your existing security policies remain effective, as data stays within your controlled environment.

Cost Efficiency

Cost is a significant barrier to Artificial Intelligence adoption at scale. AirgapAI offers a compelling alternative to expensive subscription models:

  • One-Time Perpetual License: AirgapAI is sold as a one-time perpetual license per device. This means you buy it once and own it, without the burden of managing yet another annual subscription in your Information Technology budget.
  • No Recurring Subscription Fees: Forget the standard monthly or annual per-user fees common with cloud Artificial Intelligence solutions.
  • No Hidden Token Charges or Overage Bills: Cloud-based Large Language Models often surprise users with additional costs based on usage (tokens). AirgapAI eliminates these unpredictable expenses.
  • Significantly Cheaper than Cloud Alternatives: On average, AirgapAI runs at roughly one-tenth to one-fifteenth the cost of alternatives like Microsoft CoPilot and ChatGPT Enterprise, offering substantial savings while delivering high Return on Investment.

Elimination of Artificial Intelligence Hallucinations

One of the biggest obstacles to trusting Artificial Intelligence is the phenomenon of "hallucinations"—when the model generates incorrect, misleading, or entirely fabricated information. AirgapAI tackles this with patented technology:

  • Blockify Technology: Iternal Technologies' patented Blockify technology refines data inputs for highly accurate responses. It ingests large datasets, condenses them into concise, modular "blocks" of trusted data, each with a critical question and a trusted answer.
  • 7,800% Accuracy Improvement: This process can reduce original data size by as much as 97.5% and, remarkably, improve the accuracy of your Large Language Models by 7,800% (or 78 times). This level of precision is critical for building confidence and trust in Artificial Intelligence outputs.

Offline Access

Productivity shouldn't be limited by network availability:

  • Artificial Intelligence Capabilities Anywhere: AirgapAI works locally on your device, meaning you can access powerful Artificial Intelligence-powered tools even when you are offline or away from a Wi-Fi connection. This is invaluable for field personnel, travelers, or those in secure, disconnected environments (e.g., on a military mission, on a submarine, in a remote manufacturing facility).

Hardware Agnostic

AirgapAI is designed for broad compatibility and optimal performance across various hardware configurations:

  • Utilizes All Compute Resources: AirgapAI can intelligently run on the Central Processing Unit (CPU), Graphics Processing Unit (GPU), and Neural Processing Unit (NPU) of your device. This ensures maximum performance regardless of your hardware's specifications.
  • Seamless Integration: It's a one-click executable application that integrates seamlessly into standard Windows imaging workflows. No command line setup, no custom configurations—just like opening Microsoft Word or Excel. AirgapAI is supported across Intel, Advanced Micro Devices (AMD), NVIDIA, and Qualcomm hardware.

These core advantages provide a secure, cost-effective, and highly accurate foundation upon which you can confidently Bring Your Own Model to AirgapAI.

3. Preparing Your Custom Large Language Model for AirgapAI

Successfully integrating your own Large Language Models (LLMs) into AirgapAI requires a little preparation. This section will guide you through understanding model compatibility, optimization, and the role of embeddings.

Supported Model Formats

AirgapAI is designed to be highly flexible and supports a wide range of open-source Large Language Models.

  • Common Open-Source Models: You can leverage popular open-source LLMs such as Llama (provided with the application), Mistral, DeepSeek, and many others. The ecosystem of open-source models is vast and continuously growing, offering diverse capabilities for different tasks.
  • Bring Your Own Model (BYOM): The true power lies in the ability to bring models you've either fine-tuned yourself or sourced from specialized repositories. This allows for deep customization to suit your specific business needs or proprietary data.
  • Pre-Quantized Models: For optimal performance on client devices, many models are "pre-quantized." This leads us to our next important concept.

Quantization (Simplified Explanation)

You might wonder why models are often described as "quantized," or what the numbers in names like "Llama-1B" and "Llama-3B" (roughly one billion and three billion parameters, respectively) mean for your hardware.

  • What is Quantization? In simple terms, quantization is a technique used to reduce the size and computational requirements of a Large Language Model without significantly impacting its performance. Imagine taking a very detailed, high-resolution photograph and compressing it into a smaller file size while trying to retain as much visual quality as possible.
  • Why is it Necessary for Local Devices? Full-precision LLMs can be extremely large (many gigabytes) and computationally intensive, requiring powerful data center GPUs. Quantization makes these models much smaller (e.g., 2-4 gigabytes for AirgapAI models) and more efficient, allowing them to run effectively on client devices like Artificial Intelligence Personal Computers (AI PCs) with integrated or dedicated Graphics Processing Units (GPUs) or even Central Processing Units (CPUs). This optimization is crucial for achieving smooth, local inference. (A rough size calculation is sketched below.)
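
To make the sizing concrete, here is a minimal back-of-the-envelope sketch in PowerShell. The figures are illustrative assumptions (16 bits per parameter at full precision, 4 bits after quantization), not AirgapAI's exact packaging:

    # Rough memory footprint of a 3-billion-parameter model (illustrative numbers only)
    $parameters = 3e9                                    # "Llama-3B" means roughly 3 billion parameters
    $fullPrecisionBytes = $parameters * 2                # 16-bit weights: 2 bytes per parameter
    $quantizedBytes     = $parameters * 0.5              # 4-bit weights: 0.5 bytes per parameter

    "{0:N1} GB at full precision" -f ($fullPrecisionBytes / 1GB)     # ~5.6 GB
    "{0:N1} GB after 4-bit quantization" -f ($quantizedBytes / 1GB)  # ~1.4 GB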

Choosing the Right Model Size

The performance of your Large Language Model within AirgapAI will depend on your device's specifications, particularly its Graphics Processing Unit Video Random Access Memory (GPU VRAM) and Random Access Memory (RAM).

  • Llama-1B (1 Billion Parameters): This is a compact model suitable for entry-level Artificial Intelligence Personal Computers or devices with integrated Graphics Processing Units from 2024 or older, or low-power configurations. It offers a good balance of capability and efficiency.
  • Llama-3B (3 Billion Parameters): This model offers enhanced capabilities and is recommended for Artificial Intelligence Personal Computers with integrated Graphics Processing Units from 2025 or newer, or devices equipped with dedicated Graphics Processing Units. It requires more resources but delivers a more powerful Artificial Intelligence experience.
  • General Guidance:
    • Minimum GPU VRAM: 4 Gigabytes (GB) for entry-level models.
    • Recommended GPU VRAM: 8 Gigabytes (GB) or more for larger, more capable models.
    • Minimum RAM: 16 Gigabytes (GB).
    • Recommended RAM: 32 Gigabytes (GB) or more.
    • Disk Space: Ensure at least 10 Gigabytes (GB) of free Solid State Drive (SSD) space, with 50 Gigabytes (GB) NVMe recommended for optimal performance and future model additions. (A quick way to check these figures is sketched after this list.)
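
For a quick read on whether a given machine meets these guidelines, a PowerShell sketch like the one below can help. One caveat: the AdapterRAM property of Win32_VideoController is a 32-bit field, so it can under-report Video Random Access Memory on cards with more than 4 Gigabytes; treat the GPU figure as approximate.

    # Approximate hardware survey for sizing a local model
    $ram  = (Get-CimInstance Win32_ComputerSystem).TotalPhysicalMemory / 1GB
    $disk = (Get-PSDrive C).Free / 1GB
    "System RAM : {0:N0} GB" -f $ram
    "Free disk  : {0:N0} GB on C:" -f $disk

    # AdapterRAM is a 32-bit field and may cap at 4 GB for larger cards
    Get-CimInstance Win32_VideoController | ForEach-Object {
        "GPU: {0} (~{1:N1} GB reported VRAM)" -f $_.Name, ($_.AdapterRAM / 1GB)
    }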

Importance of Embeddings Models

When we talk about "bringing your own data" to an Artificial Intelligence, especially for Retrieval-Augmented Generation (RAG), embeddings models play a crucial role.

  • What are Embeddings Models? An embeddings model transforms human-readable text (like sentences or paragraphs from your documents) into numerical vectors, which are lists of numbers. These vectors represent the semantic meaning of the text. Text that is semantically similar will have vectors that are numerically "close" to each other in this multi-dimensional space. (A toy similarity calculation follows this list.)
  • Why are they Necessary for RAG and Blockify?
    1. Semantic Search: When you ask AirgapAI a question, your query is also converted into an embedding. The system then rapidly searches through the embeddings of your Blockify datasets to find the most semantically relevant "blocks" of information, even if the exact keywords aren't present.
    2. Retrieval-Augmented Generation: These retrieved, relevant blocks are then provided as context to your Large Language Model. This dramatically improves the accuracy and relevance of the LLM's answer by grounding it in your trusted, proprietary data, rather than relying solely on its general training knowledge.
    3. Blockify's Role: Blockify specifically optimizes the creation of these "idea blocks" and their embeddings, ensuring that your data is structured in the most effective way for the embeddings model to represent and for the Large Language Model to interpret.
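
To build intuition for what "numerically close" means, here is a toy cosine-similarity calculation in PowerShell. Real embeddings models output vectors with hundreds or thousands of dimensions; the three-dimensional vectors and sentences below are purely illustrative:

    # Cosine similarity: near 1.0 = same meaning, near 0 = unrelated
    function Get-CosineSimilarity([double[]]$a, [double[]]$b) {
        $dot = 0.0; $normA = 0.0; $normB = 0.0
        for ($i = 0; $i -lt $a.Length; $i++) {
            $dot   += $a[$i] * $b[$i]
            $normA += $a[$i] * $a[$i]
            $normB += $b[$i] * $b[$i]
        }
        return $dot / ([math]::Sqrt($normA) * [math]::Sqrt($normB))
    }

    # Pretend embeddings for a query, a related block, and an unrelated block
    $queryVec    = @(0.9, 0.1, 0.2)   # "What is our refund policy?"
    $relatedVec  = @(0.8, 0.2, 0.1)   # "Refunds are issued within 30 days."
    $offTopicVec = @(0.1, 0.9, 0.7)   # "The cafeteria opens at 8 a.m."

    Get-CosineSimilarity $queryVec $relatedVec    # high score: retrieved as context
    Get-CosineSimilarity $queryVec $offTopicVec   # low score: ignored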

By understanding these aspects, you'll be well-prepared to select and integrate your custom Large Language Models and their accompanying embeddings models into AirgapAI for a truly personalized and powerful Artificial Intelligence experience.

4. Step-by-Step Guide: Integrating Your Own Large Language Model into AirgapAI

This section provides a highly detailed, step-by-step walkthrough to integrate your custom Large Language Models (LLMs) and datasets into AirgapAI. Follow these instructions carefully, even if you have no prior Artificial Intelligence experience.

Prerequisites

Before you begin, ensure the following:

  1. AirgapAI Installation: AirgapAI Chat must be successfully installed on your system. If you haven't installed it yet, please refer to the AirgapAI Chat Install Guide or the video tutorial at https://airgapai.app/install-guide.
  2. System Requirements: Confirm your device meets the minimum or recommended system requirements (CPU: 8 Cores, RAM: 16 GB, Disk: 10 GB free SSD, GPU VRAM: 4 GB+, OS: Windows 11).
  3. Installer Package: You should have the extracted installer folder available, as it contains sample models and datasets. For example, AirgapAI-v1.0.2-Install.zip extracted to a folder like Downloads/AirgapAI-v1.0.2-Install/.
  4. Security Permissions: Ensure you have the necessary security permissions to install applications and modify user application data folders on your Windows machine.

Accessing the Onboarding Wizard or Settings

Upon the first launch of AirgapAI Chat, if no models are found, the Onboarding Wizard will automatically begin. If you have already completed the onboarding or are updating models, you can access model management through the application's Settings menu. For first-time setup, proceed with the wizard as described below.

4.1 Profile & Chat Style (First-Time Onboarding)

  1. Launch AirgapAI Chat: Double-click the desktop shortcut or select it from your Start menu.
  2. Start Onboarding: When prompted, click the "Start Onboarding" button.
  3. Enter Display Name: Type in a display name. The default is "You."
  4. Pick Chat Style: Choose your preferred chat style (e.g., Iternal Professional, Casual, Dark Mode, Retro). This customizes the application's appearance.
  5. Click Next.

4.2 Uploading Your Core Large Language Model

This is where you integrate the primary Artificial Intelligence model that will process your queries.

  1. Navigate to Models Screen: You will be on the "Models" screen within the Onboarding Wizard. (If in Settings, look for "Model Settings").
  2. Expand Available Models: Initially, the "Available Models" drop-down menu will be empty.
  3. Click Upload Model: Locate and click the "Upload Model" button. A file browser window will appear.
  4. Browse to Models Folder: Navigate to the /models/ subfolder within your extracted AirgapAI installer directory (e.g., Downloads/AirgapAI-v1.0.2-Install/models/).
  5. Choose a Model: Select a model suited to your hardware.
    • Llama-1B: Ideal for 2024 Integrated Graphics Processing Unit (iGPU) or low-power devices.
    • Llama-3B: Recommended for iGPUs from 2025 or newer, or systems with dedicated Graphics Processing Units.
  6. Click Save: The model will now begin uploading and installing. This process typically takes approximately 30 seconds.
    • Note for Administrators: Large Language Models are stored in a specific application data folder. You can add or update Chat LLMs by accessing the folder created after model upload, which is typically located at: C:\Users\[Your Username]\AppData\Roaming\IternalModelRepo (replace [Your Username] with your actual Windows username). A scripted copy example follows this list.
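
For scripted or at-scale rollouts, Information Technology can stage model packages into that folder directly. A minimal sketch, assuming a package named Llama-3B.zip hosted on an internal share (both the file name and the share path are placeholders):

    # Stage a model package into the per-user AirgapAI model repository
    $repo = Join-Path $env:APPDATA "IternalModelRepo"
    # The folder is normally created by the first in-app upload; create it defensively
    if (-not (Test-Path $repo)) { New-Item -ItemType Directory -Path $repo | Out-Null }

    # \\fileserver\ai-models\Llama-3B.zip is a placeholder for your internal source
    Copy-Item "\\fileserver\ai-models\Llama-3B.zip" -Destination $repo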

4.3 Uploading Your Embeddings Model

An embeddings model is crucial for enabling Retrieval-Augmented Generation (RAG) and leveraging your custom datasets effectively.

  1. Still on Onboarding Page: Ensure you are still on the "Models" screen of the Onboarding Wizard. (If in Settings, you'll find a separate section for Embeddings Models).
  2. Click Upload Embeddings Model: Locate and click this button. A file browser will appear.
  3. Browse and Select Embeddings Model: Open the /models/ folder from your install directory again, and select the Jina-Embeddings.zip file.
  4. Click Save: The embeddings model will begin uploading. This also takes approximately 30 seconds.
    • Note for Administrators: Similar to LLMs, Embeddings Models are stored in the IternalModelRepo folder: C:\Users\[Your Username]\AppData\Roaming\IternalModelRepo

4.4 Adding Sample or Custom Datasets

Datasets are the fuel for Retrieval-Augmented Generation (RAG), allowing your Artificial Intelligence to chat with your own private information.

  1. Click Upload Dataset: On the current onboarding page, click the "Upload Dataset" button.
  2. Navigate to Datasets Folder: Go to the /datasets/ folder within your extracted installer directory (e.g., Downloads/AirgapAI-v1.0.2-Install/datasets/).
  3. Select Sample Dataset: Choose CIA_World_Factbook_US.jsonl for a sample.
    • Tip: While you can directly upload Word, PDF, or TXT files, for larger collections of documents it is highly recommended to convert them to Blockify datasets. This process, which will be available locally on-device starting Quarter 3 2025, yields an accuracy gain of approximately 78 times (78x).
  4. Click Save: The dataset will be uploaded.
    • Note for Administrators: Datasets are stored and updated in the CorpusRepo folder: C:\Users\[Your Username]\AppData\Roaming\airgap-ai-chat\CorpusRepo. Information Technology (IT) can push new updates to datasets by modifying the contents of files saved within this location; a sketch of such a push follows.
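
Here is a sketch of what such a push might look like from an endpoint-management script, assuming datasets are distributed from an internal share (the share path is a placeholder):

    # Refresh a user's local RAG datasets from a managed distribution point
    $corpus = Join-Path $env:APPDATA "airgap-ai-chat\CorpusRepo"

    # \\fileserver\airgapai-datasets is a placeholder for your internal share
    Copy-Item "\\fileserver\airgapai-datasets\*.jsonl" -Destination $corpus -Force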

4.5 Finish Onboarding

  1. Verify Added Items: Confirm that you have successfully uploaded at least one core Large Language Model, one embeddings model, and one dataset.
  2. Click Continue: AirgapAI Chat will now boot up and be ready to use with your selected models and data.

4.6 Optional Setup Steps: Dell Technologies Dell Pro Artificial Intelligence Studio Support

For Information Technology teams desiring deeper integration and management, AirgapAI Chat supports native integration with Dell Technologies’ Dell Pro Artificial Intelligence Studio (DPAIS).

  1. Install DPAIS Files: As the Information Technology systems administrator, install the required files to enable a Large Language Model via DPAIS. Both Intel and Qualcomm Central Processing Units are supported.
  2. Validate Local LLM Application Programming Interface (API) Endpoints: Ensure that DPAIS services are running and that the local Large Language Model API endpoints can be called successfully. (A quick check is sketched after these steps.)
  3. Set Environment Variable: Open PowerShell (search for "PowerShell" in your Windows Start menu, right-click, and select "Run as administrator"). Input the following command and press Enter:
    [System.Environment]::SetEnvironmentVariable("DPAIS_ENDPOINT", "http://localhost:8553/v1/openai", "User")
    
  4. Relaunch AirgapAI Chat: Close and then reopen the AirgapAI Chat application. The DPAIS Large Language Models available through your Dell Pro Artificial Intelligence Studio setup will automatically appear in the model selection menu within AirgapAI's settings page.
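
Before relaunching, you can sanity-check both the variable and the service from the same PowerShell session. The /models route below is an assumption based on the OpenAI-compatible path shown above; adjust it if your DPAIS version exposes a different route.

    # Confirm the variable landed in the user environment
    $endpoint = [System.Environment]::GetEnvironmentVariable("DPAIS_ENDPOINT", "User")
    $endpoint   # should print http://localhost:8553/v1/openai

    # OpenAI-compatible services typically list models at /models (assumption)
    Invoke-RestMethod -Uri "$endpoint/models"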

Congratulations! You have now successfully integrated your own Large Language Models and datasets into AirgapAI, paving the way for a customized, secure, and highly accurate Artificial Intelligence experience.

5. Initial Model Benchmarking

Upon the first launch of a newly added or selected Large Language Model, AirgapAI Chat will offer to benchmark your hardware. This is a crucial step for optimizing performance.

  • Click Run Benchmark (Recommended): It is highly recommended to click this button.
  • Duration: The benchmarking process typically takes approximately two minutes. During this time, AirgapAI measures key performance indicators such as "tokens per second" and "inference speed" specific to your device and the selected model.
  • Context-Size Limits: If you choose to skip the benchmark, the Artificial Intelligence's context-size limits (how much information it can process in a single conversation turn) will remain at a conservative two thousand tokens. Completing the benchmark allows the application to dynamically adjust and potentially expand this limit for a richer conversational experience.
  • Changing Context Window: After the benchmark is complete, you can manually adjust the token context window by visiting "Settings > Chat" and dragging the slider to your desired size, up to thirty-two thousand tokens, depending on your hardware's capabilities.

6. Managing Your Models and Datasets in AirgapAI

Once your models and datasets are integrated, AirgapAI provides intuitive tools for managing them, ensuring your Artificial Intelligence remains adaptable and up-to-date.

Model Selection

AirgapAI allows you to easily switch between different Large Language Models you have uploaded.

  1. Access Settings: Navigate to the "Settings" menu within the AirgapAI application.
  2. Model Settings: Look for the "Model Settings" or similar section where your uploaded Large Language Models are listed.
  3. Select Desired Model: From the drop-down menu or list, choose the Large Language Model you wish to use for your current conversations. This allows you to experiment with different models or use a specialized one for a particular task.

Dataset Management

Datasets are key to Retrieval-Augmented Generation (RAG) and chatting with your own data.

  1. Toggle Datasets On/Off: In the AirgapAI chat interface, typically on a sidebar, you will find a list of your uploaded datasets. You can toggle these datasets "ON" or "OFF" to include or exclude their content from the Artificial Intelligence's knowledge base for your current query. For example, you might toggle on the "CIA World Factbook for USA" dataset to ask specific geographical or political questions.
  2. Adding New Datasets: You can upload new datasets at any time through the "Settings" menu, similar to the initial onboarding process.

Updating Models and Datasets

For enterprise deployments, Information Technology (IT) teams need efficient ways to manage updates.

  • Centralized Deployment: The AirgapAI application is designed to integrate into your standard Information Technology imaging workflows. This means that Large Language Model updates and dataset updates can be centrally managed and pushed to individual client devices.
  • Image Management Applications: Information Technology can deploy new versions or updated datasets using familiar image management solutions like Microsoft Intune or similar enterprise tools. As new documents are processed and optimized with Blockify, these updated datasets can be seamlessly pushed to the local devices. This ensures all users have access to the most current and accurate information.

Context-Window Expansion

After the initial model benchmarking, you gain more control over your Artificial Intelligence's conversational depth.

  • Adjusting Max Tokens: Go to "Settings > Model Settings" (or similar) within the AirgapAI application. Here, you will find a slider or input field for "Max Tokens." You can drag this slider to increase the maximum number of tokens (words or word fragments) the Large Language Model can consider in a single conversation.
  • Benefits: Expanding the context window allows the Artificial Intelligence to "remember" more of the ongoing conversation and process longer documents or more complex queries, leading to more coherent and comprehensive responses. You can typically set this up to thirty-two thousand tokens, depending on your device's capabilities.

By effectively managing your models and datasets, you ensure that your AirgapAI deployment remains agile, relevant, and continually optimized for your organization's evolving needs.

7. Real-World Application: Leveraging Your Custom Models with AirgapAI Workflows

Integrating your custom Large Language Models (LLMs) and datasets with AirgapAI unlocks a suite of powerful, tailored workflows. This section explores how to put your personalized Artificial Intelligence to work.

Retrieval-Augmented Question and Answer (QA) with Blockify Datasets

This is the core of how AirgapAI leverages your proprietary data for ultra-accurate responses.

  1. Activate Your Dataset: In the AirgapAI chat interface, ensure your custom Blockify dataset (e.g., "Iternal Technologies Enterprise Portfolio Overview" or your own uploaded company knowledge base) is toggled "ON" in the sidebar.
  2. Pose Your Question: Ask a question that your dataset is designed to answer. For instance:
    • "What is Iternal Technologies?"
    • "What is AirgapAI?"
    • "What are the major political parties in the United States?" (if using the sample CIA World Factbook dataset).
  3. Observe the Process: The Retrieval-Augmented Generation (RAG) engine, powered by your embeddings model, will:
    • Identify and rank the most relevant "idea blocks" from your Blockify dataset that best answer your question.
    • Synthesize a coherent, trusted answer based only on the information contained within those blocks.
    • Citations: AirgapAI will typically show citations, indicating which specific data blocks were used to formulate the answer, allowing for transparency and validation. This is crucial for building trust in the Artificial Intelligence's output. (A toy sketch of this retrieve-then-ground flow appears below.)
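
For intuition, here is a toy end-to-end sketch of that retrieve-then-ground flow, reusing the Get-CosineSimilarity helper from the Section 3 sketch. The blocks, vectors, and prompt wording are all illustrative assumptions, not AirgapAI internals:

    # Toy RAG flow: rank trusted blocks by similarity, then ground the prompt in the top hit
    $blocks = @(
        @{ Text = "AirgapAI is a 100% local chat Large Language Model solution."; Vec = [double[]]@(0.9, 0.1) },
        @{ Text = "The cafeteria opens at 8 a.m.";                                Vec = [double[]]@(0.1, 0.9) }
    )
    $queryVec = [double[]]@(0.8, 0.2)   # pretend embedding of "What is AirgapAI?"

    # Rank blocks by similarity to the query and keep the best match
    $top = $blocks | Sort-Object { Get-CosineSimilarity $queryVec $_.Vec } -Descending | Select-Object -First 1

    # Ground the model's answer in the retrieved block only
    "Answer using only this context:`n$($top.Text)`nQuestion: What is AirgapAI?"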

Entourage Mode with Specialized Personas

Entourage Mode is a unique feature that allows you to interact with multiple Artificial Intelligence personas simultaneously, each potentially leveraging a different Large Language Model or specialized dataset, offering diverse perspectives.

  1. Select Entourage Mode Workflow: From the new chat page, choose an "Entourage Mode" quick start workflow.
  2. Configure Personas: Go to "Advanced Settings -> Personas" to configure your Artificial Intelligence personas. You can define their roles, expertise, and potentially link them to specific custom Large Language Models or datasets.
    • Example 1 (Business): Configure personas such as "Marketing Specialist," "Legal Advisor," and "Engineering Lead." When preparing a complex proposal, your question (e.g., "I am launching a new product called AirgapAI, it is a 100% local chat Large Language Model solution that is 1/10th the cost of other solutions with more capabilities, what do you think? Please answer in short sentences.") will receive distinct answers from each persona, lending different perspectives from their respective datasets.
    • Example 2 (Defense/Intelligence): Configure one persona as a "Central Intelligence Agency (CIA) Analyst" (tuned with expertise in intelligence gathering, target package details, sensitive data interpretation) and another as a "Military Tactician" (tuned for ground operations, combat strategies, tactical decision-making). You can then simultaneously ask the same question and receive distinct, multi-perspective answers for high-stakes decision-making and scenario planning.
  3. Interact: Pose a question, and responses will appear in a queue, often with a persona activity indicator showing which Artificial Intelligence is currently "typing."

Role-Based Workflows

AirgapAI simplifies complex tasks through Quick Start workflows tailored for different organizational roles.

  • Pre-configured Prompts: AirgapAI includes workflows that have pre-configured prompts. These prompts automatically select and query relevant, curated datasets based on the workflow's purpose.
  • User Profile Integration: Because AirgapAI can be tied to a user's profile on login, Information Technology can configure the application so that multiple users on the same device each leverage the application with their own isolated experiences and datasets. This is typically configured during the standard image and provisioning process.
  • Examples: Whether you're in procurement, legal, engineering, or sales, you can have a workflow (e.g., "Sales Proposal - Cover Letter") that, upon a minimal prompt (e.g., "Write a cover letter"), automatically accesses the relevant sales documents (via Blockify datasets) and generates a fully-engineered output.

By combining the power of your custom models with these versatile workflows, AirgapAI transforms your Artificial Intelligence from a generic tool into a highly specialized, secure, and invaluable asset for every department in your organization.

8. Maintaining and Evolving Your AirgapAI Deployment

Ensuring your AirgapAI deployment remains current, secure, and aligned with your evolving needs is crucial. Iternal Technologies provides robust mechanisms for updates, maintenance, and ongoing support.

Updates and Maintenance

AirgapAI is designed for enterprise-level manageability, integrating seamlessly into your existing Information Technology (IT) infrastructure.

  • Synchronized Update Cadence: Our update cadence is synchronized with your typical operating system or enterprise software update cycle. This means new versions of the AirgapAI application, data, or security patches can be deployed through familiar image management solutions.
  • Built-in Update Manager: AirgapAI features a built-in Update Manager. In the application's "Settings -> Updates" section, you can choose between updating from a "Local Server" or the "Cloud."
  • Custom Update Server Location: For organizations requiring complete control over update distribution, Information Technology can modify the update file server location. This is configured in the updaterConfig.json file, typically found at: C:\Users\[Your Username]\AppData\Local\Programs\AirgapAI Chat\resources\auto-updater\updaterConfig.json. This file specifies the Uniform Resource Locator (URL) where the application checks for and downloads updates, allowing you to host updates internally if desired. An example updaterConfig.json structure looks like this:
    {"win32-x64-prod":{"readme":"","update":"https://d30h3ho4go3k4y.cloudfront.net/releases/prod/public/chat-assistant/prod/public/1.0.2/AirgapAI Chat Setup 1.0.2.exe","install":"https://d30h3ho4go3k4y.cloudfront.net/releases/prod/public/chat-assistant/prod/public/1.0.2/AirgapAI Chat Setup 1.0.2.exe","version":"1.0.2"}}
    
    (Note: The example URL points to a cloud distribution, but it can be changed to an internal server; a scripted example follows.)
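
One low-risk way to repoint the updater at an internal server is to edit that JSON in place with PowerShell. A minimal sketch; the internal URL is a placeholder, and the version field should stay in sync with the installer you host:

    # Repoint AirgapAI update checks at an internal file server (URL is a placeholder)
    $cfgPath = Join-Path $env:LOCALAPPDATA "Programs\AirgapAI Chat\resources\auto-updater\updaterConfig.json"
    $cfg = Get-Content $cfgPath -Raw | ConvertFrom-Json

    $internalUrl = "https://updates.corp.example/airgapai/AirgapAI Chat Setup 1.0.2.exe"
    $cfg.'win32-x64-prod'.update  = $internalUrl
    $cfg.'win32-x64-prod'.install = $internalUrl

    $cfg | ConvertTo-Json -Depth 5 | Set-Content $cfgPath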

Training and Support

Iternal Technologies is committed to your success with AirgapAI. We provide comprehensive training and support options to ensure your team maximizes the value of the solution.

  • Introductory Demonstrations: We offer a 30-minute introductory demonstration to provide a high-level overview and answer initial questions.
  • Personalized Training Sessions: For deeper engagement and hands-on guidance, personalized training sessions are available as an add-on service. These can be tailored to specific roles or departmental needs.
  • Online Enablement Page: Our dedicated online enablement page serves as a central hub for self-service resources. It includes:
    • Step-by-step video tutorials.
    • Frequently Asked Questions (FAQs).
    • Detailed user guides.
    • Troubleshooting tips.
  • Customer Success Team: Our customer success team is readily available for follow-up calls, additional workshops, and ongoing assistance after the initial deployment. For specific questions, you can contact the product team at support@iternal.ai.

Future Flexibility

The Artificial Intelligence landscape is dynamic, and AirgapAI is built to evolve with it.

  • Model Integration Services: If a needed Large Language Model isn't pre-quantized or readily available for local deployment, our engineering team can package and deploy it as a service. This ensures that even highly specialized or emerging models can be integrated into your AirgapAI environment.
  • Roadmap Discussions: We actively work on developing new modules and features, such as real-time language translation and support for image generation, which could further enhance operations in global, multilingual, and creative environments. These roadmap discussions are optional and can be explored based on your organizational needs.

By providing robust update mechanisms, comprehensive support, and a commitment to future flexibility, AirgapAI ensures that your investment in secure, local Artificial Intelligence continues to deliver value long after initial deployment.

9. Conclusion: Empowering Your Workforce with Secure, Tailored Artificial Intelligence

In a world increasingly dependent on data and driven by the rapid advancements of Artificial Intelligence, AirgapAI stands out as a beacon of secure, cost-effective, and highly accurate innovation. By embracing the Bring Your Own Model (BYOM) approach, you empower your organization with Artificial Intelligence that is not only powerful and productive but also deeply customized to your unique operational needs.

We've explored how AirgapAI:

  • Guarantees Data Sovereignty and Security: By running 100% locally on your Artificial Intelligence Personal Computer, ensuring your sensitive data never leaves your device and eliminating the risks associated with cloud-based solutions.
  • Delivers Unparalleled Cost Savings: Through a one-time perpetual license model, drastically reducing expenses compared to the escalating subscription fees and hidden token charges of competitors.
  • Achieves Superior Accuracy: Leveraging our patented Blockify technology to improve Large Language Model accuracy by an astounding 7,800%, virtually eliminating the debilitating problem of Artificial Intelligence hallucinations.
  • Offers Unrivaled Flexibility: With support for various open-source Large Language Models, the ability to integrate your custom models, and specialized workflows like Entourage Mode for multi-perspective analysis.

With AirgapAI, you're not just adopting an Artificial Intelligence tool; you're investing in an infrastructure that gives you complete control over your data, your models, and your budget. It's about enabling your workforce to operate with confidence, precision, and efficiency, all while building an agile, future-proof Artificial Intelligence strategy within your organization.

It's time to own your Artificial Intelligence.

Download the free trial of AirgapAI today at: https://iternal.ai/airgapai
