Manus heralds the dawn of AGI, and AI security deserves a closer look

Author: 0xResearcher

Manus achieved a SOTA (state-of-the-art) score on the GAIA benchmark, outperforming OpenAI's models of the same tier. In other words, it can independently complete complex tasks such as cross-border business negotiations, which involve breaking down contract terms, predicting the counterparty's strategy, generating proposals, and even coordinating legal and finance teams. Compared with traditional systems, Manus's advantages lie in dynamic task decomposition, cross-modal reasoning, and memory-enhanced learning: it can break a large task into hundreds of executable subtasks, process multiple data types simultaneously, and use reinforcement learning to keep improving its decision-making efficiency and reducing its error rate.

Beyond marvelling at the pace of technical progress, Manus has once again stirred up disagreement in the community about the evolutionary path of AI: will the future be dominated by a single AGI, or by collaborative multi-agent systems (MAS)?

The question starts with Manus's design philosophy, which implies two possibilities:

One is the AGI path: keep raising the capability of a single intelligence until it approaches the all-round decision-making ability of a human.

The other is the MAS path: act as a super-coordinator that directs thousands of specialised vertical agents to work together.

On the surface this is a debate about different paths, but underneath it is the fundamental tension of AI development: how should efficiency and security be balanced? The closer a monolithic intelligence gets to AGI, the higher the risk of opaque, black-box decisions; multi-agent collaboration spreads that risk, but communication latency can cause it to miss critical decision windows.

The evolution of Manus has quietly magnified the inherent risks of AI development. Take the data-privacy black hole: in medical scenarios Manus needs real-time access to patients' genomic data, and in financial negotiations it may touch a company's undisclosed financials. Or the algorithmic-bias trap: in hiring negotiations Manus gives below-market salary recommendations to candidates of a particular ethnicity, and in legal contract review it misjudges nearly half of the clauses covering emerging industries. Then there is the adversarial-attack vulnerability: hackers implant specific audio frequencies that cause Manus to misjudge the counterparty's offer range during a negotiation.

We have to face an uncomfortable truth about AI systems: the smarter the system, the wider its attack surface.

Security, however, is a word Web3 has repeated endlessly, and a range of cryptographic approaches has grown out of Vitalik's impossible-triangle framing (a blockchain network cannot achieve security, decentralisation, and scalability all at the same time):

  • Zero-trust security model: its core idea is "never trust, always verify" -- no device is trusted by default, whether it sits inside the network perimeter or not. Every access request must be strictly authenticated and authorised to keep the system secure (a minimal sketch follows this list).
  • Decentralised identity (DID): a set of identifier standards that let an entity be identified in a verifiable, persistent way without a centralised registry. It enables a new model of decentralised digital identity, often described as self-sovereign identity, and is an essential building block of Web3.
  • Fully homomorphic encryption (FHE): an advanced encryption technique that allows arbitrary computation to be performed on encrypted data without decrypting it. A third party can operate on the ciphertext, and decrypting the result yields the same answer as running the computation on the plaintext. This matters wherever computation is needed without exposing the raw data, such as cloud computing and data outsourcing.
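
To make the zero-trust idea concrete, here is a minimal sketch of a gate that re-verifies identity, device posture and policy on every single request. All names, tokens and policies are hypothetical, not any specific product's API:

```python
# "Never trust, always verify": every request is checked, regardless of
# which network it arrives from. Hypothetical data, illustrative only.
from dataclasses import dataclass

@dataclass
class Request:
    user_token: str      # short-lived credential presented with every call
    device_id: str       # attested device fingerprint
    resource: str        # what the caller wants to touch
    action: str          # read / write / negotiate ...

VALID_TOKENS    = {"tok-alice-2024"}                       # issued by an identity provider
TRUSTED_DEVICES = {"laptop-7f3a"}                          # devices that passed posture checks
POLICY = {("tok-alice-2024", "contracts/draft", "read")}   # who may do what, where

def authorize(req: Request) -> bool:
    """Re-verify identity, device and policy on every request --
    no implicit trust is granted for being 'inside' the network."""
    if req.user_token not in VALID_TOKENS:
        return False
    if req.device_id not in TRUSTED_DEVICES:
        return False
    return (req.user_token, req.resource, req.action) in POLICY

print(authorize(Request("tok-alice-2024", "laptop-7f3a", "contracts/draft", "read")))   # True
print(authorize(Request("tok-alice-2024", "laptop-7f3a", "contracts/draft", "write")))  # False
```

The design choice to repeat the full check on every call, rather than granting a session-wide pass, is precisely what distinguishes zero trust from perimeter-based security.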

Zero-trust security models and DIDs have appeared in a fair number of projects across several bull markets, some of which succeeded and some of which drowned in the crypto wave. FHE, the youngest of these cryptographic approaches, is the heavyweight answer to the security problems of the AI era.

How to fix it?

First, the data level. All information a user enters (including biometrics and voice tone) is processed in its encrypted state, and even Manus itself cannot decrypt the original data. In a medical-diagnosis case, for example, a patient's genomic data is analysed entirely as ciphertext, so no biological information can leak.
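Production FHE stacks (such as ZAMA's libraries) are far more involved, but the core idea of "analysing ciphertext throughout" can be illustrated with a toy additively homomorphic scheme in the Paillier style. This is a sketch with deliberately tiny, insecure parameters, not real FHE:

```python
# Toy Paillier-style additively homomorphic encryption -- illustrative only.
# Real FHE supports arbitrary computation and uses vastly larger parameters.
import random
from math import gcd

# Key generation with small demo primes (completely insecure, for illustration).
p, q = 293, 433
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)    # lcm(p-1, q-1)
g = n + 1                                       # standard simple generator choice

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)             # precomputed decryption factor

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

def add_encrypted(c1, c2):
    # Homomorphic addition: multiplying ciphertexts adds the underlying plaintexts.
    return (c1 * c2) % n2

# Two "genomic markers" stay encrypted; the analyst only ever sees ciphertexts.
a, b = 17, 25
c_sum = add_encrypted(encrypt(a), encrypt(b))
assert decrypt(c_sum) == a + b                  # 42, computed without decrypting the inputs
print(decrypt(c_sum))
```

The point is that whoever runs `add_encrypted` never sees 17 or 25; only the key holder can read the result, which is exactly the property an encrypted medical-analysis pipeline would rely on.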

Second, the algorithmic level. "Encrypted model training" achieved through FHE means that even the developers cannot peer into the AI's decision-making path.

Third, the collaboration level. Communication between multiple agents is protected with threshold encryption, so breaching a single node does not cause a global data leak. Even in supply-chain attack-and-defence drills, attackers who infiltrate several agents still cannot piece together a complete view of the business.
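The claim that breaching a single node leaks nothing rests on threshold cryptography. A minimal Shamir-style secret-sharing sketch (hypothetical parameters; real systems would apply this to decryption keys rather than raw data) shows why one compromised agent learns nothing on its own:

```python
# Minimal t-of-n Shamir secret sharing -- a sketch of the threshold idea,
# not a production protocol.
import random

PRIME = 2**127 - 1          # field modulus (a Mersenne prime)

def split(secret, n, t):
    """Split `secret` into n shares; any t of them reconstruct it, fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

# A session key shared across 5 agents; any 3 of them can recover it.
key = random.randrange(PRIME)
shares = split(key, n=5, t=3)
assert reconstruct(shares[:3]) == key
assert reconstruct(shares[1:4]) == key
# A single breached agent holds one share, which by itself is information-theoretically useless.
```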

Because of the technical barrier, Web3 security may not feel directly relevant to most users, yet it is inextricably tied to their indirect interests. In this dark forest, if you do not arm yourself as best you can, you will never shake off the role of the "leek" (retail exit liquidity).

  • Launched on the Ethereum mainnet in 2017, uPort was probably the first decentralised identity (DID) project released on a mainnet.
  • On the zero-trust side, NKN released its mainnet in 2019.
  • Mind Network is the first FHE project to go live on mainnet, and has taken the lead in partnering with ZAMA, Google, DeepSeek, and others.

uPort and NKN are projects most people have barely heard of by now; security projects, it seems, simply do not get the attention of speculators. Let's wait and see whether Mind Network can escape that curse and become a leader in the security field.

The future is already here. The closer AI gets to human intelligence, the more it needs non-human lines of defence. The value of FHE is not only that it solves today's problems, but that it paves the way for the era of strong AI. On the treacherous road to AGI, FHE is not an option; it is a necessity for survival.
