Why can AI become a threat to AI?
Security and compliance in multi-agent systems.

In multi-agent systems (MAS) and composite AI, multiple artificial intelligences (AIs) cooperate to handle complex tasks more effectively. But teamwork among AIs is not without consequences for security and compliance, because one AI can pose a risk to another. We provide an overview.

Market researchers at Gartner see two sides to the generative AI movement on the path to further progress: innovations that will be driven by GenAI, and innovations that will in turn drive the progress of generative AI itself.

As with humans, cooperation between different artificial intelligences can bring complementary strengths and competencies together. Gartner counts two forms of AI cooperation among the most important AI innovations of the coming years:

  • Composite AI refers to the combined application (or fusion) of different AI techniques to improve learning efficiency and broaden the level of knowledge representation. According to the analyst firm, it solves a wider range of business problems more effectively.
  • Multi-agent systems (MAS) are a type of AI system consisting of several independent (but interacting) agents, each of which is able to perceive its environment and take actions. A minimal sketch of such an agent follows below.
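To make that second definition concrete, here is a minimal, purely illustrative Python sketch of two such agents; the class, the agent names, and the shape of the shared environment are our own assumptions, not part of Gartner's definition:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """Illustrative MAS agent: it perceives its environment and acts on it."""
    name: str

    def perceive(self, environment: dict) -> dict:
        # Read only the part of the shared state addressed to this agent.
        return {k: v for k, v in environment.items() if k.startswith(self.name)}

    def act(self, observation: dict, environment: dict) -> None:
        # Toy action: record how many observations this agent processed.
        environment[f"{self.name}.processed"] = len(observation)

# Two independent but interacting agents sharing one environment.
environment = {"planner.task": "draft report", "writer.task": "write summary"}
for agent in (Agent("planner"), Agent("writer")):
    agent.act(agent.perceive(environment), environment)
print(environment)
```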

Both developments can help on the path to Artificial General Intelligence (AGI). Microsoft, for example, explains that scenarios with multiple agents are also very helpful in highly collaborative work, such as software development, intelligent manufacturing, and company management. But these AI innovations do not only have advantages; they also bring many challenges with them.

AI teamwork needs order and harmony

If AI agents are to work together, challenges similar to those among humans can arise: the interaction must be coordinated so that everyone does their job and the order of tasks is maintained. The agents, as team members, must communicate with each other and exchange their status. Possible conflicts of objectives within the team must be identified and resolved in good time.
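What such coordination could look like in code, reduced to its core: a hypothetical status bus over which agents report completed tasks, and a coordinator that enforces the task order. All names and the queue-based design are illustrative assumptions, not a specific agent framework:

```python
import queue

# Hypothetical shared bus: agents post status messages, a coordinator
# enforces task order so nobody starts before its dependency is done.
bus: "queue.Queue[tuple[str, str]]" = queue.Queue()
task_order = ["research", "draft", "review"]          # required sequence
done: set[str] = set()

def report_status(agent: str, task: str) -> None:
    bus.put((agent, task))                            # agent announces completion

def coordinate() -> None:
    while not bus.empty():
        agent, task = bus.get()
        prerequisites = task_order[: task_order.index(task)]
        if all(p in done for p in prerequisites):     # order of tasks maintained
            done.add(task)
            print(f"{agent}: '{task}' accepted")
        else:
            # A real coordinator would requeue; the sketch just rejects.
            print(f"{agent}: '{task}' rejected, prerequisites missing")

report_status("agent_b", "draft")     # arrives out of order
report_status("agent_a", "research")
coordinate()
```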

A group of researchers at MIT described it this way: multiple large language models (LLMs) that work together harmoniously are better than one. But the question is how the necessary harmony can be ensured.

Multi-AI also means multiplied and new risks

When you think about the risks of using AI, you immediately see a long list. Potential weaknesses and deficiencies can exist in any of the AI systems that are supposed to work together as a team. But it is not just about the multiplication of existing risks; new ones are also emerging.

The results of one AI can train and influence another; this is by design, so that they can learn together and from each other. But if, for example, one AI has been manipulated with “poisoned data”, the poisoning spreads to the other AIs. For such an attack (data poisoning), it is enough for one AI in the group to be vulnerable; the others then become “infected” later.
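One conceivable countermeasure, sketched here purely for illustration, is to check the provenance of incoming samples before one agent ingests another agent's output; the trust scores and threshold below are assumptions, not a standard defense API:

```python
from dataclasses import dataclass

@dataclass
class Sample:
    text: str
    producer: str      # which agent generated this output

# Assumed trust scores per agent, e.g. from prior security assessments.
TRUST = {"agent_a": 0.9, "agent_b": 0.2}   # agent_b is suspected compromised
THRESHOLD = 0.5

def quarantine(samples: list[Sample]) -> list[Sample]:
    """Keep only samples from sufficiently trusted producers, so a poisoned
    agent's output does not silently flow into the next agent's training."""
    return [s for s in samples if TRUST.get(s.producer, 0.0) >= THRESHOLD]

incoming = [Sample("useful fact", "agent_a"), Sample("poisoned fact", "agent_b")]
print([s.text for s in quarantine(incoming)])   # -> ['useful fact']
```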

Depending on the interdependence of the AI agents, the failure of one agent can block the others and even cause them to fail as well. As in a supply chain, an incident spreads to the other agents.
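A classic pattern against such cascades, borrowed from distributed systems, is a circuit breaker between dependent agents. A minimal sketch, with the failure threshold chosen arbitrarily for illustration:

```python
class CircuitBreaker:
    """Stops calling a failing agent so its failure does not cascade."""
    def __init__(self, max_failures: int = 3):
        self.failures = 0
        self.max_failures = max_failures

    def call(self, agent_fn, *args):
        if self.failures >= self.max_failures:
            return None                       # breaker open: skip the dependency
        try:
            result = agent_fn(*args)
            self.failures = 0                 # success resets the counter
            return result
        except Exception:
            self.failures += 1                # count the failure, maybe trip
            return None

def flaky_agent(task: str) -> str:
    raise RuntimeError("agent down")

breaker = CircuitBreaker(max_failures=2)
for _ in range(4):
    print(breaker.call(flaky_agent, "summarize"))   # degrades instead of failing
```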

Confidentiality can also be indirectly affected: information that one AI agent reveals only to another agent is then “blabbed” by that agent to third parties. Obviously, the security, compliance, trustworthiness, and reliability of an AI then depend on the entire AI team. The transparency, explainability, and data protection of an AI system also suffer if the MAS or the composite AI has deficiencies in these areas.
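Against such “blabbing”, one conceivable sketch is to label data by sensitivity and filter it before it is forwarded to another agent; the labels and clearance levels below are invented for illustration:

```python
from enum import IntEnum

class Sensitivity(IntEnum):
    PUBLIC = 0
    INTERNAL = 1
    CONFIDENTIAL = 2

# Assumed clearance per receiving agent.
CLEARANCE = {"reporting_agent": Sensitivity.PUBLIC,
             "planning_agent": Sensitivity.CONFIDENTIAL}

def forward(message: str, label: Sensitivity, recipient: str) -> None:
    # Only pass on what the recipient is cleared to see.
    if CLEARANCE.get(recipient, Sensitivity.PUBLIC) >= label:
        print(f"-> {recipient}: {message}")
    else:
        print(f"-> {recipient}: [withheld: insufficient clearance]")

forward("quarterly numbers", Sensitivity.CONFIDENTIAL, "planning_agent")
forward("quarterly numbers", Sensitivity.CONFIDENTIAL, "reporting_agent")
```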

The path to multi-AI security

As nice as harmonious and trusting teamwork is, with AI agents “distrust” should be the rule, or more precisely, zero trust, i.e. no unchecked initial trust. Before one AI agent reacts to another or passes data on to it, it should be clear what risks are involved, for example what security status the other AI is in.
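Translated into code, zero trust between agents could look like the following sketch: every incoming message is authenticated (here via a pre-shared HMAC key, purely an assumption for the example) and checked against the sender's last known security status before it is processed:

```python
import hashlib
import hmac

SHARED_KEY = b"demo-key"    # assumption: pre-shared key per agent pair
SECURITY_STATUS = {"agent_a": "patched", "agent_b": "vulnerable"}

def sign(sender: str, payload: str) -> str:
    msg = f"{sender}:{payload}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()

def accept(sender: str, payload: str, signature: str) -> bool:
    """No unchecked initial trust: verify origin AND current security status."""
    authentic = hmac.compare_digest(sign(sender, payload), signature)
    healthy = SECURITY_STATUS.get(sender) == "patched"
    return authentic and healthy

msg = "intermediate result"
print(accept("agent_a", msg, sign("agent_a", msg)))   # True
print(accept("agent_b", msg, sign("agent_b", msg)))   # False: vulnerable sender
```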

However, since it is difficult and time-consuming to continuously assess the security status of an AI, it will probably be an AI that monitors and controls the other AI agents. One could also say that a MAS and a composite AI should always include a guardian AI that keeps an eye on the AI team.
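Reduced to a toy example, such a guardian could at least watch the heartbeats of its team and flag agents that fall silent; the timeout and agent names below are illustrative assumptions:

```python
import time

class Guardian:
    """Toy guardian: watches heartbeats and flags agents that fall silent."""
    def __init__(self, timeout_s: float = 5.0):
        self.last_seen: dict[str, float] = {}
        self.timeout_s = timeout_s

    def heartbeat(self, agent: str) -> None:
        self.last_seen[agent] = time.monotonic()

    def check(self) -> list[str]:
        now = time.monotonic()
        return [a for a, t in self.last_seen.items() if now - t > self.timeout_s]

guardian = Guardian(timeout_s=0.1)
guardian.heartbeat("agent_a")
guardian.heartbeat("agent_b")
time.sleep(0.2)
guardian.heartbeat("agent_a")        # agent_a checks in again, agent_b is silent
print(guardian.check())              # -> ['agent_b']
```

A real guardian would of course go beyond liveness and also inspect the agents' behavior and outputs, but the principle is the same: one dedicated instance holds the team's status.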

Testing the guardian AI itself is just as central and important as auditing security solutions. Just as a password manager, for example, has to be particularly secure so as not to put all passwords at risk, the same applies to the guardian AI. Multi-AI security therefore requires guardians on the team.
