Source: PermaDAO
The AI on AO conference introduced three important technological breakthroughs: WebAssembly 64-bit support, WeaveDrive technology, and the integration of the Llama.cpp large language model inference engine. Two projects were also highlighted: Llama Land and Apus Network. Let's delve into the details together.
On June 20th, the AI on AO conference concluded successfully. During the event, the AO protocol showcased three major technological updates that enable smart contracts to run large language models in a decentralized environment, an exciting technological breakthrough.
In particular, AO’s key breakthroughs in AI technology include the following:
WebAssembly 64-bit support: Developers can now build applications that use more than 4 GB of memory, and WebAssembly 64-bit can theoretically address up to 16 exabytes (roughly 17 billion GB). AO can currently run models of up to 16 GB, which is enough for almost all models in today's AI field (a quick back-of-the-envelope calculation of these limits appears below). The expanded memory capacity not only improves application performance but also gives developers greater flexibility and room for technical innovation.
WeaveDrive technology: WeaveDrive simplifies how developers access and manage data, letting them read Arweave data as if it were a local hard drive and stream it efficiently into the execution environment, which speeds up development and improves application performance (a hypothetical access pattern is sketched below).
Integration of the Llama.cpp large language model inference engine: By porting Llama.cpp, AO supports running various open-source large language models, such as Llama 3 and GPT-2, directly inside smart contracts. This means smart contracts can use advanced language models for complex data processing and decision-making (including financial decisions), greatly expanding the functionality of decentralized applications (see the inference sketch below).
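To make the memory figures in the first breakthrough concrete, here is a quick back-of-the-envelope calculation in plain Python (illustrative arithmetic only, not AO or WebAssembly tooling code): 32-bit pointers cap addressable memory at 2^32 bytes (4 GiB), while 64-bit pointers can in principle address 2^64 bytes (16 EiB, roughly 17 billion GiB), which is why a 16 GB model only fits under the 64-bit limit.

```python
# Back-of-the-envelope address-space arithmetic for 32-bit vs. 64-bit WebAssembly.
# Illustrative only; this is not AO or WebAssembly tooling code.

GIB = 2**30           # bytes per GiB
EIB = 2**60           # bytes per EiB

wasm32_limit = 2**32  # max bytes addressable with 32-bit pointers
wasm64_limit = 2**64  # max bytes addressable with 64-bit pointers

print(wasm32_limit // GIB, "GiB")                          # 4 GiB
print(wasm64_limit // EIB, "EiB")                          # 16 EiB
print(round(wasm64_limit / GIB / 1e9, 1), "billion GiB")   # ~17.2 billion GiB
print(16 * GIB <= wasm32_limit)                            # False: a 16 GiB model needs 64-bit
```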
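For WeaveDrive, the key idea is reading Arweave data as if it sat on a local hard drive. The sketch below illustrates only that access pattern; the mount path, the per-transaction file layout, and the helper function are hypothetical placeholders, not WeaveDrive's actual API.

```python
# Conceptual sketch of "Arweave data as a local drive" (hypothetical paths and helper).
# WeaveDrive's real interface may differ; this only illustrates the access pattern.

def read_arweave_data(tx_id: str, mount_point: str = "/data") -> bytes:
    """Stream the data of an Arweave transaction from a hypothetical local mount."""
    path = f"{mount_point}/{tx_id}"       # hypothetical: data exposed as one file per tx id
    chunks = []
    with open(path, "rb") as f:           # ordinary file I/O, as if reading a local disk
        while chunk := f.read(1 << 20):   # stream in 1 MiB chunks instead of loading all at once
            chunks.append(chunk)
    return b"".join(chunks)

# Usage (hypothetical transaction id):
# model_bytes = read_arweave_data("TX_ID_OF_MODEL_WEIGHTS")
```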
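For the Llama.cpp integration, AO ports llama.cpp itself into the contract execution environment, and the in-contract interface is not covered here. As a rough stand-in, the sketch below walks through the same load-and-infer flow using the llama-cpp-python bindings, assuming a locally available GGUF model file; the model path and prompt are illustrative.

```python
# Stand-in example: the llama.cpp inference flow, shown via the llama-cpp-python bindings.
# AO runs a port of llama.cpp inside its WASM environment; the in-contract API differs.

from llama_cpp import Llama

# Hypothetical local path to a quantized GGUF build of a model such as Llama 3.
llm = Llama(model_path="./models/llama-3-8b-instruct.Q4_K_M.gguf")

# Ask the model for a simple, structured decision, the kind of task highlighted
# for smart contracts (e.g., evaluating a proposal before acting on it).
output = llm(
    "You are a grant committee. Answer APPROVE or REJECT only.\n"
    "Proposal: build an open-source explorer for AO processes.\nDecision:",
    max_tokens=8,
    stop=["\n"],
)
print(output["choices"][0]["text"].strip())
```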
These three breakthroughs create much more room for developers to build AI applications on AO. As an example, a completely AI-driven new project called Llama Land was introduced at the conference. The conference also highlighted Apus Network, a decentralized GPU network project that aims to provide the most cost-effective execution environment for AI models on AO.
Llama Land is a large-scale online multiplayer game built on AO that creates a virtual world driven entirely by AI (the Llama 3 model). Llama Land features a system called Llama Fed, similar to the Federal Reserve but run by the Llama model, which is responsible for monetary policy and the minting of Llama tokens.
Users can request Llama tokens by paying wrapped Arweave tokens (wAR), and Llama Fed autonomously decides whether to grant the tokens based on the quality of the request (for example, whether the project or proposal is interesting or valuable), with no human intervention at any point.
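To make that flow concrete, here is a purely conceptual sketch of the loop described above: a user attaches wAR to a petition, the Llama model judges it, and Llama tokens are minted only on a positive verdict. Every name, amount, and the stubbed model call below are hypothetical; this is not Llama Fed's actual implementation.

```python
# Purely conceptual sketch of the Llama Fed grant flow described above.
# Names, amounts, and the judging logic are hypothetical placeholders, not Llama Fed code.

from dataclasses import dataclass

@dataclass
class Petition:
    sender: str       # address of the user requesting Llama tokens
    war_paid: float   # amount of wAR attached to the request
    text: str         # the pitch that the Llama model judges

def ask_llama(prompt: str) -> str:
    """Stub standing in for the in-world Llama 3 call that rates a petition."""
    return "WORTHY"   # a real deployment would run the model here instead

def handle_petition(p: Petition, balances: dict, grant: float = 10.0) -> None:
    """Mint Llama tokens to the sender only if the model judges the petition worthy."""
    verdict = ask_llama(f"Rate this petition as WORTHY or UNWORTHY:\n{p.text}")
    if p.war_paid > 0 and verdict.strip().upper() == "WORTHY":
        balances[p.sender] = balances.get(p.sender, 0.0) + grant
    # otherwise nothing is minted; no human intervenes at any step

balances = {}
handle_petition(Petition(sender="alice", war_paid=0.5, text="Build an AO process explorer"), balances)
print(balances)   # {'alice': 10.0}
```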
Currently, Llama Land is not fully open to the public, but interested users can visit its website and join the waitlist to experience it as soon as possible.
Apus Network is a decentralized, permissionless GPU network. Building on Arweave's permanent storage and AO's scalability, it uses economic incentives to provide a deterministic GPU execution environment for AI models. In practice, Apus Network can offer an efficient, secure, and cost-effective computing environment for AI applications on AO, further driving the development of decentralized AI.
Apus Network recently updated its website to improve the user experience, and development of its model evaluation and model fine-tuning features is ongoing, with interim results already achieved. Going forward, Apus Network plans to support AO ecosystem wallets, complete the related development and testing in the Playground, and then expand and deploy its model evaluation functionality on the AO platform to further strengthen its capabilities and performance.
In summary, the AI on AO conference not only showcased AO's ability to host a wide range of advanced AI models but also gave a strong push to the development of decentralized AI applications. Built on top of these technical upgrades, Llama Land demonstrates a prototype of an autonomous AI agent application. As AI applications continue to develop, the AO ecosystem will bring in more GPU resources to speed up the execution of large language models, and Apus Network has become the first decentralized GPU network to integrate with AO.
In the future, AO will further raise its memory limits to support larger-scale AI models as demand requires. It will also continue exploring autonomous AI agents to expand the application scenarios of decentralized finance and smart contracts.