How to Successfully Implement an Internal AI Assistant

2025-04-01

Large Language Models (LLMs) offer immense opportunities for businesses in knowledge management, automation, and internal support. However, integrating such AI solutions within a corporate environment presents significant challenges. The Unmute All podcast by Deutsche Telekom IT Solutions featured Máté Gaál, Digitalization and Automation Manager, who shared his insights on how to successfully implement an internal ChatGPT-like solution.

Requirements for using LLMs in a corporate environment

For an LLM to function effectively within a company, it must be able to access corporate knowledge while adhering to strict security regulations. Since LLMs are originally trained on general datasets, they may not be familiar with a company’s internal policies, processes, or specific terminology. Therefore, a system must be established that supplements the AI with internal documents and databases while ensuring that sensitive or confidential information is safeguarded. Additionally, the technical infrastructure must be capable of supporting AI operations, as large language models require substantial computing power.
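The idea of supplementing a general-purpose model with internal documents can be sketched as retrieval-augmented prompting: relevant internal text is looked up first and prepended to the question. The sketch below is a minimal illustration, assuming a toy in-memory document store and naive keyword-overlap scoring; a production system would use an embedding model and a vector database instead, and all document contents here are invented.

```python
# Minimal retrieval-augmented prompting sketch. The document store and
# the keyword-overlap retriever are illustrative stand-ins for a real
# vector search over corporate documentation.

INTERNAL_DOCS = {
    "vpn-policy": "Employees must connect through the corporate VPN when working remotely.",
    "leave-policy": "Annual leave requests must be submitted two weeks in advance.",
    "expense-policy": "Travel expenses are reimbursed within 30 days of submission.",
}

def retrieve(question: str, top_k: int = 1) -> list[str]:
    """Rank documents by the number of words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        INTERNAL_DOCS.values(),
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str) -> str:
    """Prepend the retrieved internal context so the model answers from it."""
    context = "\n".join(retrieve(question))
    return (
        "Use only the context below to answer.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

prompt = build_prompt("How far in advance must I submit leave requests?")
```

Because the model only sees the retrieved passage, its answer stays grounded in company policy rather than in whatever its general training data happened to contain.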

Optimizing documents for AI processing

Corporate documents often need adjustments to be AI-friendly. Experience shows that long, complex sentences and inconsistent terminology make it difficult for AI to process information accurately. To improve efficiency, internal documentation should be structured more clearly, with well-defined headings and consistent terminology. Tables of contents also help LLMs find relevant information faster and more accurately.
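Clear headings pay off directly at ingestion time, because documents are typically split into sections along those headings before retrieval. A minimal sketch, assuming internal documents use Markdown-style `#` headings (the function name and sample text are illustrative):

```python
# Heading-based chunking sketch: each heading starts a new chunk, so
# well-structured documents yield cleanly separated, retrievable sections.

def chunk_by_headings(text: str) -> dict[str, str]:
    """Split a document into sections keyed by their heading lines."""
    chunks: dict[str, str] = {}
    heading = "Preamble"
    body: list[str] = []
    for line in text.splitlines():
        if line.startswith("#"):
            if body:  # flush the previous section
                chunks[heading] = "\n".join(body).strip()
            heading = line.lstrip("# ").strip()
            body = []
        else:
            body.append(line)
    if body:
        chunks[heading] = "\n".join(body).strip()
    return chunks

doc = "# Remote Work\nUse the corporate VPN.\n# Leave\nSubmit requests two weeks ahead."
sections = chunk_by_headings(doc)
```

A document without headings collapses into one oversized chunk, which is exactly why unstructured internal documentation retrieves poorly.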

The importance of data security

One of the most critical aspects of internal AI solutions is data security. Since these systems process company information, strict access control must be implemented to regulate who can retrieve specific data. The AI should only process documents that an employee is already authorized to access. Additionally, output monitoring is crucial, as AI models may sometimes generate misleading or unintended responses. Proper access management and continuous oversight are essential for ensuring the safe operation of AI.
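The rule that the AI should only see documents an employee is already authorized to read can be enforced by filtering at retrieval time, before anything reaches the model. A minimal sketch, assuming documents carry access-group labels (the group names and document contents are invented for illustration):

```python
# Permission-aware retrieval sketch: each document carries the set of
# groups allowed to read it, and retrieval is restricted to documents
# the requesting employee could already open on their own.

from dataclasses import dataclass

@dataclass
class Doc:
    title: str
    text: str
    groups: frozenset  # access groups permitted to read this document

DOCS = [
    Doc("Salary bands", "Confidential salary band table ...", frozenset({"hr"})),
    Doc("VPN guide", "Connect via the corporate VPN ...", frozenset({"all-staff"})),
]

def authorized_docs(user_groups: set) -> list:
    """Return only the documents the user is already allowed to access."""
    return [d for d in DOCS if d.groups & user_groups]

visible = authorized_docs({"all-staff"})  # an ordinary employee's view
```

Filtering before retrieval, rather than trying to censor the model's output afterward, keeps confidential text out of the prompt entirely.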

What are the benefits of an internal LLM?

If properly set up, an internal ChatGPT-like solution can offer numerous advantages. Employees can quickly and efficiently access the information they need, reducing the time spent on repetitive queries. AI can assist with HR inquiries, IT support, and corporate policy interpretation, helping to ease the workload of subject matter experts. Moreover, a well-functioning AI solution promotes knowledge sharing within the company by making internal documentation more searchable and accessible.

Challenges and limitations

Several challenges emerged during implementation. The biggest hurdle was the Hungarian language, as long, complex sentences and specific corporate terminology initially caused issues for the AI. Another challenge was that general-purpose LLMs generate responses based on probabilities, which can sometimes result in inaccurate or irrelevant answers. To address this, Deutsche Telekom incorporated a knowledge graph into the system, helping structure information more effectively and improve retrieval accuracy. However, internal AI solutions remain limited in some areas: for example, they cannot compare the company's organizational structure to those of competitors, since they have no access to public internet data.
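The appeal of a knowledge graph in this setting is that factual questions resolve by exact lookup instead of probabilistic generation. A minimal sketch, storing facts as (subject, relation, object) triples; the entities and relations below are purely illustrative, not the actual graph used:

```python
# Knowledge-graph lookup sketch: facts live as triples, so structural
# questions (e.g. which unit a team belongs to) are answered by graph
# traversal rather than by a model's best guess.

TRIPLES = [
    ("IT Support", "part_of", "Operations"),
    ("Operations", "part_of", "Deutsche Telekom IT Solutions"),
    ("IT Support", "handles", "password resets"),
]

def query(subject: str, relation: str) -> list[str]:
    """Return every object linked to `subject` by `relation`."""
    return [o for s, r, o in TRIPLES if s == subject and r == relation]

def ancestors(unit: str) -> list[str]:
    """Walk `part_of` edges upward to list every parent unit."""
    chain: list[str] = []
    while (parents := query(unit, "part_of")):
        unit = parents[0]
        chain.append(unit)
    return chain
```

An answer assembled from such lookups is either present in the graph or absent, which is exactly the behavior that curbs the probabilistic inaccuracies mentioned above.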

Conclusion

The Unmute All podcast discussion highlighted that while internal LLM integration offers significant advantages, successful deployment requires thorough preparation, continuous fine-tuning, and strict security measures. AI is not a magic solution, but when properly configured, it can become a highly effective tool for supporting corporate operations.

Listen to the episode here (Hungarian): https://www.deutschetelekomitsolutions.hu/podcasts/mire-jo-a-tuningolt-llm-es-miben-kulonbozik-a-sima-chatgpt-tol/