
Why your business needs private AI (not ChatGPT)

77% of employees paste company data into ChatGPT. Italy has already fined OpenAI 15 million euros. The European AI Act imposes new obligations from 2025. If you use artificial intelligence in your business, this article is for you.

The problem: using ChatGPT at work is a risk

Every day, millions of employees use ChatGPT to write emails, summarise documents, and generate reports. It seems harmless. But according to a 2025 report, 77% of employees paste company data into AI services like ChatGPT – and 82% do so with personal accounts, completely outside corporate control.

This phenomenon is called shadow AI: the unauthorised use of artificial intelligence tools in the workplace. And the damage can be enormous.

The Samsung case: source code leaked into ChatGPT

In 2023, three Samsung engineers pasted into ChatGPT:

  • Proprietary semiconductor source code, while searching for a bug
  • Confidential code to troubleshoot equipment issues
  • An entire recording of an internal meeting to generate minutes

The result: Samsung banned all generative AI tools on corporate devices and networks. They are not alone: JP Morgan, Goldman Sachs, Apple, Deutsche Bank, and Bank of America have done the same.


GDPR and ChatGPT: what your business risks

The 15 million euro fine in Italy

In December 2024, the Italian Data Protection Authority imposed a 15 million euro fine on OpenAI – the first significant penalty worldwide against a generative AI company. The violations cited:

  • No legal basis for processing personal data used to train ChatGPT
  • Failure to notify the data breach of March 2023
  • Inadequate privacy notice: only in English and too vague
  • No age verification to prevent access by minors

What happens when an employee pastes data into ChatGPT

Under the GDPR, the company remains liable – even if the employee acted without authorisation. Pasting personal data (of customers, employees, patients) into ChatGPT without a legal basis constitutes a GDPR violation by the data controller.

The possible consequences:

  • Mandatory notification to the supervisory authority within 72 hours (Art. 33 GDPR)
  • Fines up to 20 million euros or 4% of global annual turnover
  • Incalculable reputational damage

The problem of data transfers to the US

ChatGPT processes data on US-based infrastructure. The EU-US Data Privacy Framework was confirmed in 2025, but Max Schrems’ organisation NOYB has already announced a challenge before the Court of Justice of the EU. If the framework is struck down – as happened with the Privacy Shield in 2020 – every transfer of personal data to OpenAI could become illegal.


AI Act: new obligations for European SMEs

The EU Artificial Intelligence Regulation (AI Act, Reg. EU 2024/1689) entered into force on 1 August 2024 with a phased rollout:

| Date | What changes |
|---|---|
| February 2025 | Ban on prohibited AI practices + AI literacy obligation for all |
| August 2025 | Obligations for general-purpose AI models (GPAI) |
| August 2026 | Full obligations for high-risk AI systems, transparency, human oversight |

The AI literacy obligation is already in force

Since 2 February 2025, every company that uses AI tools must ensure its staff has a “sufficient level of AI literacy”. This applies to everyone – from a 5-person SME to a multinational corporation.

The risk is not the tool, but how you use it

The same ChatGPT can be:

  • Minimal risk: for drafting emails or brainstorming
  • High risk: for screening job candidates, assessing creditworthiness, or making decisions that affect people

If you use AI for decisions that impact individuals, heavy obligations kick in: impact assessments, human oversight, and registration in the EU database.

AI Act penalties

| Violation | Maximum penalty |
|---|---|
| Prohibited practices | 35 million euros or 7% of turnover |
| High-risk systems | 15 million euros or 3% of turnover |
| False information to authorities | 7.5 million euros or 1% of turnover |

For most companies, the higher of the two amounts applies; for SMEs, the AI Act caps the penalty at the lower of the fixed figure and the turnover percentage. But even the lower figure can be significant.


The solution: private AI

Private AI solves the problem at its root: your data never leaves the corporate perimeter.

Instead of sending data to American servers, a private AI system runs on controlled infrastructure – European, on-premise, or in your certified provider’s data centre. Large language models (LLMs) run locally, documents stay where they are, and no data ends up training third-party models.
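What does “the models run locally” look like in practice? Below is a minimal sketch, assuming a self-hosted model served through Ollama’s HTTP API on localhost; the endpoint and model name are illustrative, not a description of any specific product. The point is that the prompt and the answer never leave your own infrastructure.

```python
import requests

# A self-hosted LLM exposed only on the local network: the prompt never
# leaves your infrastructure. Endpoint and model name are illustrative
# (here: Ollama's /api/generate endpoint with a Llama-family model).
OLLAMA_URL = "http://localhost:11434/api/generate"

def ask_private_llm(prompt: str) -> str:
    response = requests.post(
        OLLAMA_URL,
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["response"]

if __name__ == "__main__":
    # The confidential text is processed on your own hardware,
    # not sent to a third-party cloud service.
    print(ask_private_llm("Summarise: <confidential meeting notes here>"))
```

Nothing in this flow touches a third-party API: swap the model or the serving layer and the guarantee stays the same, because the network boundary is the guarantee.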

What changes with private AI

| | ChatGPT (cloud) | Private AI |
|---|---|---|
| Where data goes | US servers (OpenAI) | Controlled infrastructure |
| Trained on your data | Possible (free tier) | Never |
| GDPR compliance | Complex, risky | Native |
| AI Act compliance | User’s responsibility | Built into the design |
| Extra-EU transfer | Yes | No |
| Access control | Limited | Full |
| Audit trail | Partial | Full |

How PRISMA works: HTX’s private AI stack

At HTX, we built PRISMA – a private AI platform designed for businesses that handle sensitive data.

Where PRISMA operates

PRISMA can operate within the Data Centre of BIC Incubatori FVG, the certified incubator of the Friuli Venezia Giulia Region: dedicated infrastructure, redundant connectivity, and physical and logical security. For workloads requiring additional computing power, we rely on TriesteValley HPC, a high-performance computing cluster equipped with NVIDIA GPUs.

What PRISMA does

  • Enterprise AI chat: an AI assistant that answers solely based on your internal documents via Retrieval-Augmented Generation (RAG) – a minimal sketch of the pattern follows this list
  • Text-to-SQL (MANTA): ask your databases questions in natural language and get precise answers without writing code
  • AI classification (KOI): models trained on your data to classify, categorise, and decide – with full transparency and human oversight
  • No data leakage: everything stays on European infrastructure, under your control
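To illustrate the RAG pattern behind the enterprise chat (a sketch of the general technique, not PRISMA’s implementation): documents are embedded locally, the most relevant passage is retrieved by similarity, and only that passage is injected into the prompt for the local model. The embedding model and the document snippets below are illustrative assumptions.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # runs locally, no API calls

# Illustrative internal documents; in a real system these would come
# from your document store.
documents = [
    "Refunds are processed within 14 days of the return request.",
    "Employees accrue 2.5 days of paid leave per month worked.",
    "Production incidents must be reported to the on-call engineer.",
]

# A small embedding model that runs entirely on local hardware.
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question: str) -> str:
    """Return the document most similar to the question (cosine similarity)."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q  # normalized vectors: dot product == cosine
    return documents[int(np.argmax(scores))]

question = "How long do refunds take?"
context = retrieve(question)

# The retrieved passage is injected into the prompt, so the model answers
# only from internal documents rather than from its training data.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # in a full pipeline, this prompt would go to the local LLM
```

Because retrieval, embedding, and generation all run inside the corporate perimeter, the confidentiality guarantee is architectural, not contractual.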

Why Trieste

HTX operates in the scientific hub of Trieste – the European city with the highest density of researchers per capita (37 per 1,000 workers). In April 2025, the AGORAI Innovation Hub was launched here, a partnership between Generali and Google Cloud for AI. Our ecosystem includes SISSA, ICTP, the University of Trieste, Fincantieri, and illycaffè.


What to do now: 5 steps for your business

  1. Audit the AI tools your employees are using – including unauthorised ones
  2. Classify each use by risk level according to the AI Act (minimal, limited, high) – a simple inventory sketch follows this list
  3. Start AI literacy training – it has been mandatory since February 2025
  4. Write a corporate AI policy that defines what is and is not allowed
  5. Evaluate a private AI solution that keeps your data under your control
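Steps 1 and 2 need not be complicated: a structured inventory is enough to start. The sketch below is purely illustrative (hypothetical tools and risk assignments, not legal advice) – it records each AI use with its AI Act risk tier so that shadow AI and high-risk uses surface immediately.

```python
from dataclasses import dataclass

# AI Act risk tiers relevant to most SMEs (Reg. EU 2024/1689).
RISK_TIERS = ("minimal", "limited", "high", "prohibited")

@dataclass
class AIUse:
    tool: str          # which tool is used
    purpose: str       # what it is used for
    risk: str          # AI Act tier assigned after review
    authorised: bool   # covered by the corporate AI policy?

# Hypothetical audit results; your own inventory will differ.
inventory = [
    AIUse("ChatGPT (personal account)", "drafting emails", "minimal", False),
    AIUse("ChatGPT (personal account)", "screening CVs", "high", False),
    AIUse("Private AI chat", "querying internal docs", "minimal", True),
]

# Surface shadow AI and high-risk uses first.
for use in inventory:
    if use.risk == "high" or not use.authorised:
        print(f"REVIEW: {use.tool} – {use.purpose} "
              f"(risk={use.risk}, authorised={use.authorised})")
```

Even this crude listing makes the two red flags visible: personal accounts (shadow AI) and any use that touches decisions about people (high risk).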

Want to learn more?

If you want to understand how private AI can work for your business – without GDPR risks, without extra-EU transfers, without shadow AI – get in touch. We will respond within 24 hours.


This article was written by the team at HTX – Human Technology eXcellence. We design private artificial intelligence systems for healthcare and industry, from our data centre in Trieste.
