How to Choose the Right Local Large Language Model (Llama, Mistral, DeepSeek) for Your Hardware
Become the operator who always picks “fast and accurate” over “slow and overkill.” This article helps you select the best on-device model that balances quality, speed, and battery so your team ships work instead of waiting on tokens.
This comprehensive guide is designed for everyone—from information technology (IT) professionals and data analysts to content creators—aiming to master the selection and utilization of on-device Artificial Intelligence (AI) models. We will walk you through matching Large Language Models (LLMs) to your specific hardware configurations, focusing on optimal performance per watt, leveraging AirgapAI's incredible 78 times accuracy boost, and ensuring reliable offline capabilities that surpass cloud-based alternatives. You'll find a quick decision matrix and practical prompts to validate the quality of your chosen model, empowering you to integrate powerful AI seamlessly into your workflow.
1. Demystifying Artificial Intelligence: What is it, and What are Large Language Models?
Before we dive into the specifics of AirgapAI, made by Iternal Technologies, let's establish a foundational understanding of Artificial Intelligence (AI) and Large Language Models (LLMs). Don't worry, we'll explain everything as if you've never encountered these terms before!
What is Artificial Intelligence (AI)?
At its core, Artificial Intelligence (AI) refers to the ability of machines to perform tasks that typically require human intelligence. This can range from simple tasks like recognizing speech or images, to more complex ones such as making decisions, solving problems, and even understanding and generating human language. Think of it as teaching a computer to think or act in ways that mimic our own intelligent behaviors.
What are Large Language Models (LLMs)?
Large Language Models (LLMs) are a specific type of Artificial Intelligence (AI) designed to understand, generate, and process human language. Imagine a super-advanced text prediction system that has read an immense amount of text from the internet—books, articles, websites, and more. Based on this vast knowledge, an LLM can:
- Answer questions: Provide information on a wide range of topics.
- Summarize documents: Condense long texts into key points.
- Generate creative content: Write stories, poems, emails, or marketing copy.
- Translate languages: Convert text from one language to another.
- Engage in conversation: Participate in a chat, much like you would with another person.
The "large" in Large Language Model refers to the sheer size of these models, both in terms of the amount of data they are trained on and the number of internal parameters they use to learn. More parameters often mean a more sophisticated understanding of language, though this also increases their computational requirements.
Local AI vs. Cloud AI: Why On-Device Matters
You might have heard of Artificial Intelligence (AI) tools like ChatGPT or Microsoft CoPilot, which are examples of cloud-based AI. This means that when you interact with them, your requests and data are sent over the internet to powerful servers (the "cloud") where the Large Language Models (LLMs) do their processing. The results are then sent back to you.
Local AI, or on-device AI, is fundamentally different. With local AI, the entire Large Language Model (LLM) and all its processing power run directly on your personal computer or device. Your data never leaves your machine; it's all processed right there. This concept is often referred to as "air-gapped" because it creates a secure, isolated environment where no data can flow in or out across a network without explicit control.
The benefits of local, on-device AI are profound, especially for businesses:
- Unparalleled Security and Data Sovereignty: Your sensitive, proprietary, or classified data remains exclusively on your device. There's no risk of it being stored, analyzed, or leaked from third-party cloud servers. This is critical for industries with strict compliance requirements, such as finance, healthcare, and government.
- Offline Functionality: Since no internet connection is required for processing, you can use your AI assistant anywhere—on a plane, in a secure facility with no network access, or even in remote field operations.
- Cost-Effectiveness: Say goodbye to recurring subscription fees, hidden token charges, and unpredictable cloud usage bills. Local AI solutions often come with a one-time perpetual license, significantly reducing your long-term operational costs.
- Reduced Latency and Enhanced Speed: Without the need to send data back and forth across the internet, responses are instantaneous, offering a seamless and highly responsive user experience.
- Customization and Control: You have direct control over the models and data used, allowing for tailored solutions that precisely fit your organizational needs.
This is precisely where AirgapAI by Iternal Technologies shines—it brings the power of sophisticated AI directly to your personal computer, completely offline and in total control.
2. Introducing AirgapAI by Iternal Technologies
AirgapAI, made by Iternal Technologies, is a groundbreaking solution that addresses the most critical challenges organizations face with Artificial Intelligence (AI) adoption: security, cost, and accuracy. It's designed to bring the immense power of Large Language Models (LLMs) directly to your workforce in a fast, secure, and highly cost-effective manner.
The Vision Behind AirgapAI
Iternal Technologies recognized that while cloud-based AI offers powerful capabilities, it presents significant hurdles for many businesses, particularly concerning data privacy, unpredictable costs, and the infamous "AI hallucination" problem (where AI generates incorrect or nonsensical information). AirgapAI was built from the ground up to overcome these challenges, enabling organizations to leverage advanced AI without compromise.
Key Value Propositions and Differentiators
AirgapAI distinguishes itself with several core benefits:
- 100 Percent Local and Air-Gapped Operation: This is the cornerstone of AirgapAI. The entire application and its Large Language Models (LLMs) run exclusively on your personal computer. Your data never leaves the device, eliminating the risks associated with external network breaches or cloud storage. This ensures absolute data sovereignty and security, making it ideal for the most sensitive environments.
- Patented Blockify Technology for 78 Times Accuracy: One of the biggest fears with AI is its tendency to "hallucinate" or provide inaccurate information. AirgapAI tackles this head-on with its proprietary Blockify technology. This innovative data ingestion and optimization solution refines your organizational data, structuring it for precision. The result? A remarkable 7,800 percent (%) improvement in LLM accuracy, meaning 78 times fewer hallucinations than traditional methods. This builds trust and ensures reliable, actionable insights from your AI.
- Unbeatable Cost-Effectiveness: AirgapAI is available as a one-time perpetual license per device, a stark contrast to the costly, recurring subscription models of cloud alternatives. Customers often find AirgapAI to be roughly one-tenth to one-fifteenth the cost of solutions like Microsoft CoPilot or ChatGPT Enterprise, offering massive savings while driving a high return on investment (ROI).
- Fast Deployment and Immediate Value: Designed as an "easy button" for generative AI, AirgapAI boasts a simple, one-click installation. It integrates seamlessly into existing information technology (IT) imaging workflows, allowing for rapid deployment across your entire fleet. Users can be up and running and experiencing tangible benefits in a matter of minutes, not months.
- Optimized for AI PCs: AirgapAI is built to harness the full potential of modern Artificial Intelligence Personal Computers (AI PCs), leveraging their Central Processing Units (CPUs), Graphics Processing Units (GPUs), and Neural Processing Units (NPUs) for optimal performance and energy efficiency.
- Flexible Large Language Model Support: Users can choose from a suite of pre-quantized, open-source Large Language Models (LLMs) or even bring their own models, ensuring adaptability to evolving needs and specific use cases.
- Role-Based Workflows and Entourage Mode: AirgapAI includes Quick Start workflows tailored for different roles, pre-configuring prompts and selecting relevant datasets. The unique Entourage Mode allows users to interact with multiple AI personas simultaneously, gaining diverse expert perspectives for complex decision-making and scenario planning.
AirgapAI empowers your workforce with advanced, secure, and trustworthy Artificial Intelligence (AI) capabilities, putting you in control of your data and your budget.
3. Understanding Your Hardware: The Foundation of Local Artificial Intelligence
To maximize the performance of your local Artificial Intelligence (AI) solution, AirgapAI, it's crucial to understand the computing components within your device. AirgapAI is designed to utilize all available resources, providing flexibility and efficiency across a range of hardware configurations.
The Three Pillars of Modern Computing: CPU, GPU, and NPU
Modern personal computers, especially Artificial Intelligence Personal Computers (AI PCs), come equipped with specialized processors that each play a vital role in handling different types of tasks. For AI, three components are key:
- Central Processing Unit (CPU):
  - What it is: Often called the "brain" of the computer, the Central Processing Unit (CPU) is a general-purpose processor responsible for executing most of the instructions that make your computer work. It handles everything from operating system functions to running your applications.
  - Role in AI: For AI, the CPU is excellent for handling sequential tasks, managing system resources, and running smaller Large Language Models (LLMs) or portions of larger ones, especially when other specialized processors are unavailable or when power efficiency is prioritized for less intensive tasks. AirgapAI can efficiently use the CPU to search through vast amounts of data very quickly.
  - AirgapAI application: If you have older or less powerful hardware, AirgapAI has a CPU-only option ensuring broad compatibility.
 
- Graphics Processing Unit (GPU):
  - What it is: Originally designed to accelerate the rendering of graphics and video, Graphics Processing Units (GPUs) are highly specialized for parallel processing. This means they can perform many calculations simultaneously, making them ideal for tasks that involve large datasets and repetitive operations.
  - Role in AI: In Artificial Intelligence (AI), GPUs are workhorses for training and running Large Language Models (LLMs) because AI computations often involve thousands of parallel mathematical operations. A powerful Graphics Processing Unit (GPU), whether integrated into your main processor or installed as a dedicated card, can significantly speed up AI inference (the process of getting answers from an LLM).
  - AirgapAI application: AirgapAI can run large LLMs on your high-performing integrated or dedicated Graphics Processing Unit (GPU) without incurring the additional costs of specialized data center hardware.
 
- Neural Processing Unit (NPU):
  - What it is: The Neural Processing Unit (NPU) is a newer, dedicated accelerator specifically designed for Artificial Intelligence (AI) workloads. It's built for sustained, energy-efficient processing of AI tasks, making it ideal for on-device AI that needs to run continuously without draining battery life.
  - Role in AI: Neural Processing Units (NPUs) handle heavy Artificial Intelligence (AI) workloads at very low power consumption. They are particularly good for tasks like real-time AI effects (e.g., enhanced video calls), background AI processing, and running optimized Large Language Models (LLMs) with high power efficiency.
  - AirgapAI application: AirgapAI can leverage your next-generation Neural Processing Unit (NPU) for the most power-efficient Large Language Model (LLM) inferencing.
 
The Rise of the Artificial Intelligence Personal Computer (AI PC)
An Artificial Intelligence Personal Computer (AI PC) is more than just a regular computer; it's a system specifically engineered to accelerate Artificial Intelligence (AI) tasks locally. It typically features a powerful Central Processing Unit (CPU), an advanced Graphics Processing Unit (GPU) (often integrated), and a dedicated Neural Processing Unit (NPU) that all work in harmony.
Why an AI PC is ideal for AirgapAI:
- Balanced Performance and Power Efficiency: AI PCs intelligently distribute Artificial Intelligence (AI) workloads across the Central Processing Unit (CPU), Graphics Processing Unit (GPU), and Neural Processing Unit (NPU). This ensures that tasks are handled by the most appropriate component, leading to optimal performance without excessive power consumption.
- Enhanced Local AI Experience: With dedicated AI hardware, an AI PC provides a smoother, faster, and more responsive experience for on-device Large Language Models (LLMs), allowing AirgapAI to deliver its full potential.
- Future-Proof Investment: As Artificial Intelligence (AI) capabilities become more integral to everyday computing, an AI PC positions you to take advantage of the latest advancements.
AirgapAI System Requirements and Prerequisites
To ensure a smooth and effective experience with AirgapAI, your system should meet certain specifications. The following table outlines the minimum and recommended hardware requirements:
| Component | Minimum | Recommended | 
|---|---|---|
| Central Processing Unit (CPU) | 8 Cores | 8 Cores / 16 Threads or better | 
| Random Access Memory (RAM) | 16 Gigabytes (GB) | 32 Gigabytes (GB) or more | 
| Disk Storage | 10 Gigabytes (GB) free (Solid State Drive (SSD)) | 50 Gigabytes (GB) Non-Volatile Memory Express (NVMe) | 
| Graphics Processing Unit (GPU) | 4 Gigabytes (GB) Video Random Access Memory (VRAM), 2024 or newer | 8 Gigabytes (GB) Video Random Access Memory (VRAM) or more | 
| Operating System (OS) | Windows 11 | Latest patches | 
Permissions: You will need security permissions to install the application.
AirgapAI's intelligent design ensures that it will function across a broad spectrum of hardware, dynamically adapting to your device's capabilities to provide the best possible Artificial Intelligence (AI) experience.
4. The Heart of Local AI: Choosing Your Large Language Model
The Large Language Model (LLM) is essentially the "brain" of your Artificial Intelligence (AI) assistant. Choosing the right one for AirgapAI involves balancing computational requirements with the desired level of intelligence and responsiveness.
What is an LLM "Model"?
In the context of Artificial Intelligence (AI), a "model" is a highly complex mathematical structure that has been trained on vast amounts of data. For Large Language Models (LLMs), this data consists of text and code. Through this training, the model learns patterns, grammar, facts, and even nuances of human language.
Think of it like this: if AirgapAI is the vehicle (the application that runs on your computer), the LLM is the engine. A more powerful engine (a larger, more sophisticated model) might offer better performance and more complex capabilities, but it will also require more fuel (more computing power from your Central Processing Unit (CPU), Graphics Processing Unit (GPU), or Neural Processing Unit (NPU)).
Model Size (Parameters) and Its Impact
Large Language Models (LLMs) are often described by their "parameter count." Parameters are the variables within the model that are adjusted during its training phase. Generally:
- More parameters = more knowledge and complexity: Models with billions of parameters tend to have a deeper understanding of language, can perform more intricate tasks, and are less prone to making simple errors.
- More parameters = higher computational demands: Larger models require significantly more Random Access Memory (RAM) and Video Random Access Memory (VRAM) on your Graphics Processing Unit (GPU), and more processing power from your Central Processing Unit (CPU) or Neural Processing Unit (NPU). This can impact speed and battery life on less powerful hardware.
The good news is that advancements in AI are constantly making models smaller, more efficient, and smarter. What once required a huge model now fits into a more compact size, perfectly suited for on-device operation.
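The relationship between parameter count and memory can be sketched with a back-of-the-envelope calculation. This is a general rule of thumb for LLM weights, not an AirgapAI-specific formula, and real usage runs higher once the context window and runtime overhead are included:

```python
def estimated_memory_gb(parameters_billions: float, bits_per_parameter: int) -> float:
    """Rough memory footprint of an LLM's weights.

    Rule of thumb: parameters x bytes-per-parameter. Actual usage is higher
    once the key/value cache for the context window and runtime overhead
    are added on top.
    """
    bytes_per_parameter = bits_per_parameter / 8
    return parameters_billions * 1e9 * bytes_per_parameter / 1e9

# A 3-billion-parameter model at 16-bit precision needs ~6 GB for weights
# alone; quantized to 4 bits, the same model fits in ~1.5 GB.
print(estimated_memory_gb(3, 16))  # 6.0
print(estimated_memory_gb(3, 4))   # 1.5
```

This is why quantized models (covered in the next section) are so important for on-device AI: cutting the bits per parameter shrinks the memory footprint proportionally.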
Common Open-Source Models Supported by AirgapAI
AirgapAI offers flexibility, allowing you to use a variety of popular open-source Large Language Models (LLMs). These models are often "quantized," meaning they've been optimized to run more efficiently on consumer hardware without significant loss in quality. Here are some examples you might encounter:
- Llama (provided with app): A family of highly capable models developed by Meta. Llama models are known for their strong performance across various tasks. AirgapAI often provides:
  - Llama-1B (1 Billion parameters): Ideal for devices with integrated Graphics Processing Units (iGPUs) from 2024 or low-power systems. It offers a good balance of speed and capability for general tasks.
  - Llama-3B (3 Billion parameters): A step up, suitable for integrated Graphics Processing Units (iGPUs) from 2025 or systems with dedicated Graphics Processing Units (GPUs). It provides enhanced intelligence for more demanding tasks.
 
- Mistral: Known for its efficiency and strong performance, especially for its size. Mistral models are a great choice for balancing capability and speed on local devices.
- DeepSeek: Another family of open-source models offering competitive performance, often excelling in specific areas like coding or logical reasoning.
You can bring your own models as well, giving you ultimate control over your AI experience.
How to Choose the Right Model Based on Your Hardware
Matching your Large Language Model (LLM) to your hardware ensures you get the best performance without overtaxing your system. Think of your computer's specifications as different tiers of an Artificial Intelligence Personal Computer (AI PC):
- Entry-Level AI PC (Minimum Specifications):
  - Hardware: Older integrated Graphics Processing Unit (GPU) or relying heavily on the Central Processing Unit (CPU). 16 Gigabytes (GB) Random Access Memory (RAM).
  - Recommended Model: Llama-1B.
  - Experience: Good for basic summarization, simple question answering, and quick content generation. Focus on speed and low power consumption.
 
- Good AI PC (Recommended Specifications):
  - Hardware: Newer integrated Graphics Processing Unit (iGPU) (e.g., from 2024-2025) or entry-level dedicated Graphics Processing Unit (GPU). 32 Gigabytes (GB) Random Access Memory (RAM).
  - Recommended Model: Llama-3B, smaller Mistral or DeepSeek models.
  - Experience: Excellent for more complex content creation, detailed summarization, and more nuanced question answering. Noticeably faster and more capable than entry-level.
 
- Better AI PC (Higher-End Specifications):
  - Hardware: Mid-range dedicated Graphics Processing Unit (GPU) (e.g., 8 Gigabytes (GB) Video Random Access Memory (VRAM) or more). Robust Central Processing Unit (CPU) and ample Random Access Memory (RAM).
  - Recommended Model: Larger Llama, Mistral, or DeepSeek models.
  - Experience: Unlocks advanced capabilities, faster generation for longer responses, and the ability to run multiple AI personas simultaneously in Entourage Mode with ease.
 
- Ultra AI PC (Cutting-Edge Specifications):
  - Hardware: High-end dedicated Graphics Processing Unit (GPU) with significant Video Random Access Memory (VRAM), latest Central Processing Unit (CPU), and Neural Processing Unit (NPU).
  - Recommended Model: Largest available open-source models; capable of running highly complex tasks with extreme speed and power efficiency.
  - Experience: Top-tier performance for all AI tasks, including real-time, multi-persona interactions and very large context windows.
 
Quick Decision Matrix: Choosing Your LLM
| Your Hardware Tier | Key Features | Recommended LLM(s) | Expected Experience | 
|---|---|---|---|
| Entry-Level AI PC (Minimum) | Older iGPU / CPU-focused, 16 GB RAM | Llama-1B | Basic summarization, quick Q&A, good for initial exploration. | 
| Good AI PC (Recommended) | Newer iGPU (2024-2025) / Entry Dedicated GPU, 32 GB RAM | Llama-3B, smaller Mistral/DeepSeek | Enhanced content creation, detailed summaries, faster responses. | 
| Better AI PC (Higher-End) | Mid-range Dedicated GPU (8 GB+ VRAM) | Larger Llama, Mistral, DeepSeek | Advanced capabilities, long-form content, smooth Entourage Mode. | 
| Ultra AI PC (Cutting-Edge) | High-end Dedicated GPU + Latest CPU + NPU | Largest open-source models | Top-tier performance for all AI tasks, real-time complex interactions. | 
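The decision matrix above can be expressed as a small helper function. This is an illustrative sketch only: the thresholds and model names mirror the table, and `recommend_model` is a hypothetical helper, not part of AirgapAI.

```python
def recommend_model(ram_gb: int, vram_gb: int = 0, igpu_year: int = 0) -> str:
    """Map a hardware profile to a model tier, mirroring the decision matrix.

    Thresholds are illustrative; run the built-in benchmark on your own
    hardware to confirm the best fit.
    """
    if vram_gb >= 8:
        # Better/Ultra tier: dedicated GPU with 8 GB+ VRAM
        return "Larger Llama / Mistral / DeepSeek"
    if ram_gb >= 32 and (vram_gb > 0 or igpu_year >= 2024):
        # Good tier: newer iGPU or entry-level dedicated GPU, 32 GB RAM
        return "Llama-3B or a smaller Mistral/DeepSeek"
    # Entry tier: older iGPU or CPU-focused, 16 GB RAM
    return "Llama-1B"

print(recommend_model(ram_gb=16, igpu_year=2023))  # Llama-1B
print(recommend_model(ram_gb=32, igpu_year=2025))  # Llama-3B or a smaller Mistral/DeepSeek
print(recommend_model(ram_gb=32, vram_gb=8))       # Larger Llama / Mistral / DeepSeek
```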
Remember, AirgapAI's core strength, the Blockify technology, will significantly boost the accuracy of any Large Language Model (LLM) you choose, making even smaller models incredibly effective with your specific data.
5. The Power of Blockify: Enhancing Large Language Model Accuracy
One of the most significant challenges with Artificial Intelligence (AI), particularly with Large Language Models (LLMs), is the phenomenon of "hallucinations"—where the AI generates plausible-sounding but factually incorrect information. This erodes trust and limits the practical application of AI in critical business contexts. AirgapAI, made by Iternal Technologies, solves this problem with its patented Blockify technology.
Blockify is not just a feature; it is the ultimate data management solution for Large Language Models (LLMs) at scale, designed to transform your messy, unstructured data into a precise, single source of truth that your AI can trust.
What is Blockify and Why is it Revolutionary?
Blockify is a sophisticated data ingestion and optimization engine that radically improves the accuracy and reliability of your Large Language Models (LLMs) when interacting with your private organizational data. It acts as a bridge between your vast stores of documents and the Large Language Model (LLM), ensuring that the AI always draws from validated, high-quality information.
Key benefits of Blockify:
- Elimination of AI Hallucinations: By structuring data into a "single source of truth," Blockify drastically reduces the chances of an Artificial Intelligence (AI) hallucination. This means you can trust the answers AirgapAI provides, knowing they are grounded in your validated corporate knowledge.
- 78 Times (7,800%) Improvement in LLM Accuracy: This is a monumental leap. Blockify is proven to boost the accuracy of Large Language Models (LLMs) by 78 times, meaning a 7,800 percent (%) improvement in reliability. This transforms AI from a risky tool into a trustworthy, indispensable asset.
- Optimized for Retrieval-Augmented Generation (RAG): Blockify's output is perfectly tailored for Retrieval-Augmented Generation (RAG) systems. In RAG, the Large Language Model (LLM) first retrieves relevant information from a trusted data source (your Blockified data) before generating an answer, ensuring accuracy and providing citations.
- Significant Data Reduction: The process can reduce the original data size by as much as 97.5 percent (%), distilling massive documents into just 2.5 percent (%) of their original volume without losing critical information. This makes your data easier to manage and faster for the AI to process.
The Blockify Process: From "Docs" to "Blocks"
Blockify takes your raw documents—which could be thousands of sales proposals, request for proposal (RFP) responses, technical manuals, or legal contracts—and processes them through a meticulous workflow:
- Data Ingestion: Blockify ingests large and diverse datasets, supporting multiple file formats including text, Hypertext Markup Language (HTML), Portable Document Format (PDF), Microsoft Word, Microsoft PowerPoint, and graphic files. For video content, it can extract still frames or transcribe audio as needed.
- Deduplication and Distillation: The system intelligently identifies and removes redundant or outdated information, then condenses the remaining essential data into concise, modular units called "blocks."
- Block Creation: Each block is meticulously crafted to contain three key elements, optimizing it for Large Language Model (LLM) understanding and precise answering:
  - A Name: This is a descriptive title (displayed in blue within the interface) that quickly identifies the content topic of the block.
  - A Critical Question: This is the key query that a customer or user might ask related to the block's content (often displayed in bold, italicized text).
  - A Trusted Answer: This is the distilled, accurate, and approved response to the critical question, directly derived from your source material. It avoids the pitfalls of outdated or redundant data.
 
- Rich Metadata Tagging: To support zero-trust environments and robust data governance, each block is tagged with rich metadata. This includes:
  - Classification: Defining the type of information (e.g., financial, legal, technical).
  - Permissions: Specifying which user roles or departments can access this information.
  - Classification Levels: Indicating sensitivity (e.g., public, internal, confidential, top secret).
 
- Human Review Loop: After initial ingestion, these blocks are sent for a quick human review. This crucial step allows your subject matter experts to:
  - Update Messaging: Ensure the trusted answer reflects the latest information and corporate messaging.
  - Approve Content: Validate the accuracy and appropriateness of the block.
  - Flag Outdated Content: Identify and remove or update information that is no longer relevant (e.g., from 2019), preventing it from impacting AI responses.
 
- Dataset Creation and Management: Once blocks are reviewed and approved, they form curated datasets. As new documents are Blockified, these datasets can be updated and then pushed to local devices via standard image management applications like Microsoft Intune, ensuring that your workforce always has access to the most current and accurate information.
By transforming your raw data into a structured, validated, and highly accurate corpus of "blocks," AirgapAI's Blockify technology fundamentally changes how Large Language Models (LLMs) interact with information, building unparalleled trust and unlocking unprecedented levels of AI accuracy for your organization.
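To make the "block" structure concrete, here is a minimal sketch of what one block might look like when serialized as a JSON Lines (JSONL) record, the format used by sample datasets such as CIA_World_Factbook_US.jsonl. The field names and values here are hypothetical illustrations; the actual Blockify schema may differ.

```python
import json

# One illustrative "block": a name, a critical question, a trusted answer,
# and the metadata tags described above. All field names are hypothetical.
block = {
    "name": "AirgapAI Licensing Model",
    "critical_question": "How is AirgapAI licensed?",
    "trusted_answer": "AirgapAI is sold as a one-time perpetual license per device.",
    "metadata": {
        "classification": "technical",
        "permissions": ["sales", "it"],
        "classification_level": "internal",
    },
}

# A JSONL dataset stores one block per line.
line = json.dumps(block)
restored = json.loads(line)
print(restored["critical_question"])  # How is AirgapAI licensed?
```

Because each block is a small, self-contained record, datasets built this way are easy to diff, review, and redistribute through standard image-management tooling.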
6. AirgapAI in Action: Workflow, Installation, and Everyday Use
Now that you understand the underlying technology, let's walk through the practical workflow of installing and using AirgapAI, made by Iternal Technologies. We will cover everything from the initial setup to engaging with its powerful features.
A. Installation Protocol
Installing AirgapAI is designed to be straightforward and integrate seamlessly into your existing Information Technology (IT) infrastructure.
- Download the Installer Package:
  - Obtain the latest ZIP archive from the internal or cloud link provided by your IT department.
  - Save it to your "Downloads" folder or any other writable location on your computer.
  - Example file name: AirgapAI-v1.0.2-Install.zip
 
- Extract the Installer:
  - Locate the downloaded ZIP file.
  - Right-click on the file (e.g., AirgapAI-v1.0.2-Install.zip).
  - Select "Extract All..." from the context menu.
  - Choose a destination folder for the extracted files (the default is usually a new folder within your "Downloads" directory).
  - Click "Extract."
 
- Run the Installer:
  - Open the newly extracted folder.
  - Double-click on the executable (EXE) file named AirgapAI Chat Setup.exe.
  - If your operating system (OS) security features (like SmartScreen or Gatekeeper) prompt you, choose "Allow" or "Run anyway" to proceed with the installation.
 
- Follow the Installer Wizard:
  - Read and accept the license agreement.
  - Choose whether to create a Desktop Shortcut (recommended for easy access).
  - Click "Install."
  - Click "Finish" once the installation is complete.
 
B. First-Launch Onboarding Wizard
Upon launching AirgapAI Chat for the very first time (via your new desktop shortcut or Start menu entry), the application will check for existing models. If none are found, an intuitive Onboarding flow will guide you through the initial setup.
- Start Onboarding:
  - Click the "Start Onboarding" button.
 
- Profile & Chat Style:
  - Enter a display name for yourself (the default is typically "You").
  - Pick a preferred Chat Style. AirgapAI offers various aesthetic options such as "Iternal Professional," "Casual," "Dark Mode," "Retro," and more.
  - Click "Next."
 
- Uploading the Core Large Language Model (LLM):
  - On the "Models" screen, you'll see an "Available Models" drop-down menu, which will initially be empty.
  - Click the "Upload Model" button.
  - Browse to the /models/ subfolder located within the folder where you extracted the installer package.
  - Choose a model suited to your hardware (refer back to our Decision Matrix in Section 4):
    - Llama-1B: Recommended for 2024 integrated Graphics Processing Units (iGPUs) or low-power machines.
    - Llama-3B: Recommended for integrated Graphics Processing Units (iGPUs) from 2025 or devices with a dedicated Graphics Processing Unit (GPU).
  - Click "Save." The upload typically takes about 30 seconds.
  - Note: Information Technology (IT) administrators configuring the system can also add or update Large Language Models (LLMs) by accessing the folder created after model upload, which is typically found within %appdata% at C:\Users\YourUsername\AppData\Roaming\IternalModelRepo.
 
- Uploading an Embeddings Model:
  - Still on the onboarding page, click the "Upload Embeddings Model" button.
  - Open the /models/ subfolder again and select Jina-Embeddings.zip.
  - Click "Save." This upload also takes approximately 30 seconds.
  - Note: Information Technology (IT) administrators can update Embeddings Models by modifying the contents of the files saved within %appdata% at C:\Users\YourUsername\AppData\Roaming\IternalModelRepo.
 
- Adding Sample or Custom Datasets:
  - Datasets are crucial for Retrieval-Augmented Generation (RAG), allowing AirgapAI to provide highly accurate answers from your specific data.
  - Click the "Upload Dataset" button.
  - Navigate to the /datasets/ subfolder from your install folder.
  - Select the sample dataset CIA_World_Factbook_US.jsonl.
  - Click "Save."
  - Tip: While you can upload Microsoft Word, Portable Document Format (PDF), or plain text (.TXT) files directly, converting larger corpora to Blockify format (as described in Section 5) is recommended for the approximately 78 times accuracy gain. Local on-device Blockify capabilities will be available starting in Quarter 3 2025.
  - Note: Information Technology (IT) administrators can push dataset updates by modifying the contents of the files saved within %appdata% at C:\Users\YourUsername\AppData\Roaming\airgap-ai-chat\CorpusRepo.
 
- Finish Onboarding:
  - Verify that all three items (Core LLM, Embeddings Model, and at least one Dataset) are added.
  - Click "Continue." AirgapAI Chat will now boot with your selected configurations.
 
C. Optional Setup Steps for IT Teams
For Information Technology (IT) teams desiring advanced functionality, AirgapAI Chat supports native integration with Dell Technologies' Dell Pro AI Studio (DPAIS).
- Install DPAIS Files: As the IT System's administrator, install the required files to enable a Large Language Model (LLM) via Dell Pro AI Studio (DPAIS). Both Intel and Qualcomm are supported.
- Validate API Endpoints: After Dell Pro AI Studio (DPAIS) services are running, validate that the local Large Language Model (LLM) Application Programming Interface (API) endpoints can be called.
- Set Environment Variable: Open PowerShell and input the following command:
  [System.Environment]::SetEnvironmentVariable("DPAIS_ENDPOINT", "http://localhost:8553/v1/openai", "User")
- Relaunch AirgapAI Chat: Relaunch the AirgapAI Chat application, and the Dell Pro AI Studio (DPAIS) Large Language Models (LLMs) available will automatically appear in the model selection menu within the settings page.
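To illustrate how a local tool or script could pick up the same endpoint that AirgapAI Chat reads, here is a minimal sketch. The function name is ours, not part of any AirgapAI or DPAIS API; only the DPAIS_ENDPOINT variable and its default value come from the setup steps above.

```python
import os

def dpais_base_url(default="http://localhost:8553/v1/openai"):
    """Resolve the DPAIS OpenAI-compatible base URL from the DPAIS_ENDPOINT
    environment variable, falling back to the documented default."""
    return os.environ.get("DPAIS_ENDPOINT", default)

# An OpenAI-compatible client could then be pointed at dpais_base_url().
```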
D. Initial Model Benchmarking
Upon first model launch, AirgapAI Chat offers to Benchmark your hardware. This is highly recommended.
- Click "Run Benchmark."
- Duration: Approximately 2 minutes.
- Purpose: It measures key metrics like tokens per second and inference speed, providing valuable insights into your system's AI performance.
- You can skip the benchmark, but context-size limits will remain at a conservative 2,000 tokens until a benchmark is completed. After benchmarking, you can change the token context window by visiting "Settings > Chat" and dragging the slider to the desired size (up to 32,000 tokens or more, depending on your model and hardware).
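For readers curious what "tokens per second" means in practice, the measurement reduces to timing a generation call and dividing the token count by the elapsed time. This is a generic sketch, not AirgapAI's benchmarking code; the `generate` callable stands in for whatever model interface is being timed.

```python
import time

def tokens_per_second(generate, prompt):
    """Time one generation call and report throughput.
    `generate` is any callable that returns a list of tokens for the prompt."""
    start = time.perf_counter()
    tokens = generate(prompt)
    elapsed = time.perf_counter() - start
    return len(tokens) / elapsed if elapsed > 0 else float("inf")
```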
E. Everyday Workflows with AirgapAI Chat
Once installed and configured, AirgapAI Chat provides a powerful and intuitive interface for various Artificial Intelligence (AI) tasks.
- File Upload & Summarization:
- Action: Drag and drop a file (Portable Document Format (PDF), Microsoft Word Document (DOCX), or plain text (.TXT)) directly onto the chat window, or click the paperclip icon (📎) to browse for a file.
- Prompt Example: "Summarize this document in bullet points."
- Result: AirgapAI Chat instantly embeds the document's content and provides a concise summary.
 
- Guided Demo Workflows:
- Location: Find these on the Workflow Bar, located below the new chat window.
- Example Scenario: Imagine needing to draft a sales proposal cover letter.
- Action: Select the "Sales Proposal – Cover Letter" workflow. Upload any supporting documents. Enter a minimal or robust prompt (e.g., "Write a cover letter for a new Dell AI PC").
- Result: Receive a fully engineered output. Click "Copy" (📋) to place the text on your clipboard.
 
- Retrieval-Augmented Generation (RAG) Question Answering with Blockify Datasets:
- This is where the power of Blockify (Section 5) truly shines.
- Action: Toggle your dataset ON in the sidebar. For instance, select the "Iternal Technologies Enterprise Portfolio Overview" dataset (extracted from sales materials) or the "CIA World Factbook for USA" dataset.
- Query Your Dataset (GPU) Workflow: Use prompts like "What is Iternal Technologies?" or "What is AirgapAI?"
- Example Prompt: "What are the major political parties in the United States?" (with the CIA World Factbook dataset selected).
- Result: The Retrieval-Augmented Generation (RAG) engine fetches the most relevant "IdeaBlocks" from your curated dataset, then the Large Language Model (LLM) synthesizes a coherent, trusted answer, often showing the citations it used.
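To give a feel for the retrieval half of this pipeline, here is a deliberately naive sketch that ranks text blocks by word overlap with the query. AirgapAI's actual RAG engine uses an Embeddings Model over IdeaBlocks, not keyword matching; this toy version only illustrates the "fetch the most relevant blocks, then synthesize" shape of the workflow.

```python
def retrieve_blocks(query, blocks, top_k=2):
    """Rank text blocks by naive word-overlap with the query (illustration only)."""
    q_words = set(query.lower().split())
    scored = []
    for block in blocks:
        overlap = len(q_words & set(block.lower().split()))
        scored.append((overlap, block))
    # Highest-overlap blocks first; drop blocks with no overlap at all.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [block for overlap, block in scored[:top_k] if overlap > 0]
```

The retrieved blocks would then be placed in the Large Language Model's context window so the answer is grounded in the dataset rather than the model's general training.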
 
- Entourage Mode (Multi-Persona Chat):
- Concept: A unique feature allowing users to interact with multiple AI personas simultaneously, each drawing from different, specialized datasets.
- Action: Select an "Entourage Mode" quick start workflow from the new chat page. You can configure personas in "Advanced Settings > Personas" (e.g., Marketing, Sales, Engineering).
- Recommended Prompt: "I am launching a new product called AirgapAI, it is a 100 percent local chat Large Language Model (LLM) solution that is one-tenth the cost of other solutions with more capabilities, what do you think? Please answer in short sentences."
- Scenario Example: When preparing a complex proposal, your marketing, legal, and technical support personas can weigh in, lending different perspectives from their respective datasets. In a defense or intelligence scenario, you could configure one persona as a Central Intelligence Agency (CIA) analyst (with expertise in intelligence gathering and sensitive data interpretation) and another as a military tactician (tuned for insights on ground operations and combat strategies). Users can ask the same question and receive distinct, multi-perspective answers, supporting high-stakes decision-making.
- Result: Responses appear in a queue, with a persona activity indicator showing which persona is "typing" a response.
 
- Multilingual Conversations:
- Action: Simply prompt the AI in the language you desire.
- Prompt Example: "Tell me a short story in German about renewable energy."
- Result: The Large Language Model (LLM) switches language seamlessly. You can click "Stop" to halt generation at any time.
 
F. AirgapAI Application Overview and Role-Based Segmentation
- AirgapAI is a local, modular application that integrates seamlessly into standard imaging workflows. It's distributed as an executable (EXE) application.
- Role-based segmentation: Because the application is tied to the user’s profile on login, multiple users of the same device can each leverage the application with their own isolated experiences and datasets. This is configured per user profile through standard image and provisioning processes.
G. Advanced Configuration
AirgapAI offers several advanced settings to tailor your experience:
- Context-Window Expansion: After benchmarking, go to "Settings > Model Settings" and adjust "Max Tokens" up to 32,000 (or more, depending on your hardware and model). This allows the AI to consider more of your conversation history for better context.
- Styling & Themes: Personalize your interface by going to "Settings > Appearance" to switch between predefined themes or even build custom Cascading Style Sheets (CSS).
- Workflow Templates: "Settings > Workflows" allows Information Technology (IT) teams to add or edit prompt chains, ideal for pre-loading company-specific tasks and standardizing AI interactions.
- In-App Benchmarking Suite: The "Settings > Benchmarking" tab lets you test the performance of new models on your hardware, helping you optimize your choices.
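The "prompt chains" behind workflow templates can be pictured as an ordered list of prompt templates, where each step's output feeds the next step's input. The sketch below is our own illustration of that pattern, not AirgapAI's workflow format; the `llm` callable and the `{input}` placeholder are assumptions for the example.

```python
def run_workflow(steps, llm, user_input):
    """Run a simple prompt chain: each step's template is filled with the
    previous output and sent to the model."""
    text = user_input
    for template in steps:
        text = llm(template.format(input=text))
    return text

# Example chain an IT team might pre-load for a company-specific task:
steps = ["Summarize this request: {input}",
         "Rewrite the summary as bullet points: {input}"]
```

Standardizing chains like this is what lets every user get consistent, pre-engineered output from a minimal prompt.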
7. Maintenance, Updates, and Support for AirgapAI
Ensuring your AirgapAI solution, made by Iternal Technologies, remains current, secure, and fully functional is straightforward, with built-in mechanisms for updates and robust support channels.
A. Ongoing Updates and Maintenance
AirgapAI is designed for easy, enterprise-level management:
- Update Cadence: Our update schedule is synchronized with your typical operating system (OS) or enterprise software update cycle. This means Information Technology (IT) departments can manage AirgapAI updates alongside their existing processes, reducing complexity.
- Deployment: Whether pushing new datasets (which are updated as new documents are Blockified) or security patches, IT can deploy new versions and updated content through familiar image management solutions like Microsoft Intune or other similar applications.
- Update Manager: Updates are delivered by the built-in Update Manager. You can choose to receive updates from a "Local Server" (ideal for secure, air-gapped environments) or a "Cloud" source, configurable in "Settings > Updates."
- Note: You can change the file server update location by modifying the updaterConfig.json file located at C:\Users\YourUsername\AppData\Local\Programs\AirgapAI Chat\resources\auto-updater\updaterConfig.json. This file specifies the Uniform Resource Locator (URL) where the application checks for new releases.
B. Deployment and Multi-User Access
AirgapAI is built with enterprise deployment in mind:
- Installation Protocol: AirgapAI is delivered as an executable (EXE) file that integrates straightforwardly into your standard Windows imaging process. Our deployment manual provides detailed instructions on imaging, provisioning, and role-specific configuration.
- Seed Deployments: For initial pilot deployments, the process is coordinated closely with the Iternal Technologies team, ensuring the application and all intended datasets (pre-packaged via Blockify) are pre-loaded for a seamless start.
- Multi-User Support: AirgapAI runs directly on each client device, integrated into your standard image-provisioning process. For secure multi-user environments on a single device, Information Technology (IT) can configure the image so that each user accesses personalized, role-specific datasets stored securely within their individual user folder. This ensures data isolation and tailored experiences.
C. Training and Support
Iternal Technologies is committed to your success with AirgapAI:
- Introductory Training: We offer a 30-minute introductory demonstration to get you started, followed by personalized training sessions as an add-on service to deepen your team's expertise.
- Online Enablement Page: A comprehensive online resource is available, featuring:
- Step-by-step video tutorials.
- Frequently Asked Questions (FAQs).
- Detailed user guides.
- Troubleshooting tips.
 
- Customer Success Team: Our dedicated customer success team is available for follow-up calls, additional workshops, and ongoing support after initial deployment, ensuring you continually derive maximum value from AirgapAI.
- Direct Contact: For additional questions or support, you can contact the product team directly at support@iternal.ai.
With these robust systems for updates and support, AirgapAI ensures a consistent, secure, and high-performing Artificial Intelligence (AI) experience for your entire organization.
8. The Unrivaled Benefits of AirgapAI: Become Unstoppable
You've learned how Artificial Intelligence (AI) works, how AirgapAI leverages your hardware, and how Blockify ensures unprecedented accuracy. Now, let's tie it all together to understand what AirgapAI, made by Iternal Technologies, truly offers. It's not just about an application; it's about transforming your capabilities and becoming the operator who has their AI together.
The Core Value Proposition: Trusted, Secure, Cost-Effective
AirgapAI stands on three unwavering pillars, delivering an AI experience that is simply unmatched by cloud-based alternatives:
- Trusted: With the patented Blockify technology, your Large Language Models (LLMs) achieve an astonishing 78 times (7,800%) greater accuracy. This means no more second-guessing AI output. You gain reliable, fact-checked answers directly from your validated corporate data. You become the individual who relies on trusted insights, not guesswork.
- Secure: By running 100 percent locally on your Artificial Intelligence Personal Computer (AI PC), AirgapAI ensures your most sensitive data never leaves your device. This air-gapped security protects against external breaches, upholds data sovereignty, and meets the strictest compliance requirements. You become the leader who ensures absolute data privacy in an AI-driven world.
- Cost-Effective: Say goodbye to the endless cycle of subscriptions and hidden fees. AirgapAI offers a one-time perpetual license at a fraction of the cost (often 1/10th to 1/15th) of cloud competitors. This delivers immediate Return On Investment (ROI) and predictable budgeting. You become the strategist who achieves maximum value without sacrificing quality.
What You Become With AirgapAI
AirgapAI isn't just a tool; it's a vehicle to empower you and your team to be:
- Undeniably Productive: Accelerate tasks from document analysis to content creation, freeing up valuable time for strategic work.
- Decisively Confident: Make high-stakes decisions with the assurance that your AI's insights are accurate, multi-perspective (thanks to Entourage Mode), and grounded in your organization's trusted data.
- Always Connected, Anywhere: Leverage powerful AI capabilities even when offline, ensuring productivity whether you're in the field, on a secure network, or without internet access.
- In Control: Have complete autonomy over your data, models, and workflows, tailoring AI to your exact needs without external dependencies.
- Forward-Thinking: Invest in a solution that is optimized for cutting-edge Artificial Intelligence Personal Computers (AI PCs) and designed for continuous improvement, positioning you at the forefront of AI adoption.
AirgapAI by Iternal Technologies empowers you to move beyond the limitations and risks of traditional AI, to embrace a future where AI is not just powerful, but also private, precise, and practical.
9. Conclusion
In a world increasingly reliant on Artificial Intelligence (AI), the ability to choose and implement a solution that offers uncompromising security, remarkable accuracy, and exceptional cost-effectiveness is paramount. AirgapAI, made by Iternal Technologies, stands as that definitive solution. By understanding your hardware, selecting the right local Large Language Model (LLM), and leveraging the transformative power of Blockify technology, you can unlock an AI experience that is not only robust but also perfectly tailored to your organizational needs.
AirgapAI puts you in command, transforming your raw data into trusted insights, empowering your workforce with a fast and secure AI assistant, and ensuring your most sensitive information remains exactly where it belongs—within your control. It’s time to move beyond the compromises of cloud-based AI and embrace the future of on-device, private intelligence.
Download the free trial of AirgapAI today at: https://iternal.ai/airgapai