Using PAI3 Network distributed computing for AI resources
PAI3 users can submit compute requests, such as the prompt "Find me the best flight to Paris," to an AI model to produce an inference (indicated by the black arrows). The prompt is augmented with additional context, security, and utility guardrails by the RAG on the node. The PAI3 Agent identifies the best available node to compute the inference, as well as the best AI model and vector embeddings, via the vector index on the blockchain. Each computing node's AI model produces an inference, and the resulting vector embedding is returned to the requesting node. Every operation is conducted on multiple nodes in parallel to achieve consensus and accuracy. The resulting AI inferences and vector embeddings from each compute request are returned to the original requester's public IPFS vault. Additionally, the Agent manages the decentralized storage of user files, summaries, and vector packages on the PAI3 hardware and IPFS.
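The fan-out-and-consensus step above can be sketched as follows. This is a minimal illustration, not PAI3's actual protocol: `run_inference`, the node IDs, and the majority-vote quorum are all hypothetical stand-ins for the node selection, RAG augmentation, and consensus machinery the network performs.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a PAI3 compute node: each node would run the
# selected AI model (with its RAG guardrails) and return an inference.
def run_inference(node_id: int, prompt: str) -> str:
    # Placeholder: every honest node returns the same answer here.
    return "cheapest-flight-option"

def consensus_inference(prompt: str, node_ids: list[int], quorum: float = 0.5) -> str:
    # Fan the same request out to several nodes in parallel.
    with ThreadPoolExecutor(max_workers=len(node_ids)) as pool:
        results = list(pool.map(lambda n: run_inference(n, prompt), node_ids))
    # Accept an answer only if a strict majority of nodes agree on it.
    answer, votes = Counter(results).most_common(1)[0]
    if votes / len(results) > quorum:
        return answer
    raise RuntimeError("nodes failed to reach consensus")

print(consensus_inference("Find me the best flight to Paris", [1, 2, 3]))
```

In a real deployment the agreed-upon result would then be written to the requester's IPFS vault rather than printed.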
Furthermore, operations such as AI model tuning may require computing resources beyond what is available on the local machine. With the PAI3 Network, AI tuners can request additional nodes for greater compute power; the resulting vector embeddings are returned to their IPFS vault (indicated by the arrow). By combining multiple nodes and using parallel computing, PAI3 offers a cheaper and more trusted environment for AI tuning. This distributed compute also gives PAI3 contributors an opportunity to earn revenue while their computers are used in the PAI3 Network.
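A rough sketch of sharding a tuning workload across extra nodes and gathering the embeddings back: `embed_shard`, the node IDs, and the `vault` dict are hypothetical placeholders for the network's node allocation, embedding computation, and IPFS vault storage.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical node-side embedding step; a real node would run the
# model's embedding layer. We fake a one-dimensional vector per document.
def embed_shard(node_id: int, shard: list[str]) -> list[list[float]]:
    return [[float(len(doc))] for doc in shard]

def tune_with_extra_nodes(docs: list[str], node_ids: list[int]) -> dict:
    # Partition the dataset round-robin across the requested nodes.
    shards = {n: docs[i::len(node_ids)] for i, n in enumerate(node_ids)}
    with ThreadPoolExecutor(max_workers=len(node_ids)) as pool:
        futures = {n: pool.submit(embed_shard, n, shard)
                   for n, shard in shards.items()}
    # Gather every node's vector embeddings into the requester's vault
    # (a plain dict here, standing in for the IPFS vault).
    return {n: f.result() for n, f in futures.items()}

vault = tune_with_extra_nodes(["doc one", "doc two", "doc three"], [101, 102])
```

Adding node IDs to the list scales the parallelism, which is the cost advantage the paragraph above describes.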