
AI & DePIN

22 Topics 26 Posts
  • 0 Votes
    1 Posts
    16 Views
    TypoX_AI ModT


    Following Notcoin's explosive success, projects across the Ton ecosystem have grown rapidly. Notcoin's sufficiently simple Tap2Earn design, combined with Telegram's massive user base, let it break into the mainstream quickly, completing early user education, onboarding, and social distribution. By pairing Notcoin-style launch momentum with a sustainable long-term economic flywheel, TypoCurator, launched by TypoX AI on Ton, has become one of the most noteworthy AI projects in the Ton ecosystem.

    Introducing TypoCurator

    TypoCurator is TypoX AI's Ton-based AI data-annotation mini-app, with a very low barrier to entry. In the mini-app, players simply pick the answer to a question based on their own judgment, or give a numerical rating to an AI's reply. These answers and ratings are used as labeled data for AI training, and players earn TPX rewards for these contributions.

    TypoCurator is no longer a plain Tap2Earn game: it keeps Tap2Earn's low operational barrier while turning users' knowledge input into a genuinely valuable AI annotation training set. While answering knowledge questions, players can also win TPX token rewards.

    TypoX AI Closes the Value Loop

    The TypoCurator Protocol aims to build an efficient AI annotation economy on an open network, supplying annotation capacity that spans regions, cultures, and skill levels, and offering a community-driven dataset solution for building open-source AI models. At the same time, the TypoCurator Protocol seeks to bring AI-native earning opportunities to people around the world, so they can share in the gains of AI innovation. Annotations completed through TypoCurator are applied directly to improving TypoX AI's Agent, producing a better answering experience; meanwhile, true to its open principles, the team has open-sourced the collected Web3 answer dataset on HuggingFace. With this, TypoX AI closes the value loop.

    Gamified Design

    The TypoCurator mini-app is gamified and has no steep learning curve; any Telegram user can start using it seamlessly. No NFT purchase or deposit is required to begin. A lightweight interface guides users through choosing answers to each question, and these choices and replies become part of the AI training dataset. Users can earn TPX by inviting friends, improving annotation quality, and similar actions.

    Users who want to contribute more can spend TPX to raise their daily energy cap, unlocking more annotation opportunities and larger TPX rewards.

    Leveling Up AI Training: A TypoCurator Guide

    Since its public beta began on June 12, TypoCurator has gained more than 50,000 users, completed 1.8 million annotations, and distributed a cumulative 45,000 TPX. Here is how to start your TypoCurator journey:

    Step 1: Search for TypoCurator on Telegram, or go there directly via the link: https://t.me/typocurator_bot

    Step 2: Tap "Sign in" to bring up your TON wallet and log in.

    Step 3: Tap "Start Training" to begin answering questions, annotating data, and training the AI; tap the avatar and username in the top-left corner to open your profile page.

    Step 4: Begin your Web3 quiz and learning journey; each round has 20 questions, and once you finish you can see the TPX reward earned for that round.

    Step 5: Check your total rewards on your profile page; once you meet the withdrawal threshold, tap "Withdraw" to withdraw your TPX rewards on-chain.

  • TypoCurator Dataset

    4
    0 Votes
    4 Posts
    52 Views
    J

    Showing my support; hope the project keeps getting better and better.

  • The Development Journey of TypoX AI

    1
    0 Votes
    1 Posts
    47 Views
    TypoX_AI ModT
    1. Embracing Randomness

    Creating an AI product begins with understanding the essential tasks involved. A primary focus is controlling the randomness of a large language model (LLM). Randomness, a double-edged sword, can either enhance the model’s generalization ability or lead to hallucinations. In creative contexts, where precision is less critical, randomness fuels imaginative outputs. Many users are initially captivated by the generative capabilities of large models. However, in domains requiring strict logic, such as factual data and mathematical code, precision is paramount, and hallucinations are more prevalent.

    Early optimizations in mathematics and coding involved integrating external tools (e.g., Wolfram Alpha) and coding environments (e.g., Code Interpreter). Subsequent steps included intensive training and workflow enhancements, enabling the model to solve complex mathematical problems and generate practical code of substantial length.

    Regarding objective facts, large models can store some information within their parameters, yet predicting which knowledge points are well-integrated is challenging. Excessive emphasis on training accuracy often leads to overfitting, reducing the model’s generalization capability. Training data inevitably has a cutoff date, limiting the model’s ability to predict future events. Consequently, enhancing generative results with external knowledge bases has gained significant traction.

    2. Challenges and Prospects of RAG

    2.1 What is RAG?

    Retrieval Augmented Generation (RAG), proposed as early as 2020, has seen wider application with the recent development of LLM products. Mainstream RAG processes do not modify the model's parameters but enhance the input to improve output quality. Using a simple formula y = f(x): f is the large model, a parameter-rich (random nonlinear) function; x is the input; y is the output. RAG focuses on optimizing x to enhance y.

    RAG’s cost-efficiency and modularity, allowing model interchangeability, are significant advantages. By retrieving relevant content for the model rather than providing full texts, RAG reduces token usage, lowering costs and response times.
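    Optimizing x can be sketched in a few lines of Python. This is an illustrative toy, not TypoX AI's pipeline: the word-overlap scorer stands in for a real embedding-based retriever, and the `retrieve`/`augment` names are our own.

```python
# Toy sketch of RAG's input enhancement: f (the model) is untouched;
# we only build a better x by prepending retrieved context.
def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank passages by word overlap with the query (stand-in for embeddings)."""
    q_words = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda doc: len(q_words & set(doc.lower().split())),
                    reverse=True)
    return ranked[:k]

def augment(query: str, corpus: list[str]) -> str:
    """Build the enhanced input x that would be passed to the unmodified model f."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

corpus = [
    "TON is a layer-1 blockchain integrated with Telegram.",
    "RAG retrieves relevant passages before generation.",
]
print(augment("What does RAG retrieve?", corpus))
```

    Because only x changes, the underlying model can be swapped freely, which is the modularity advantage noted above.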

    2.2 Long Context Models

    Recent models supporting long contexts can process lengthy texts, overcoming the early limitation of short inputs. This development has led some to speculate that RAG may become less relevant. However, in needle-in-a-haystack tests (finding specific content within a long text), long-context models perform better than earlier versions but still struggle with mid-text content. Moreover, current tests are single-threaded, and the complexity of multi-threaded tests remains a challenge. Third-party testers have also raised concerns about potential training biases in some models.

    While improvements in long-context models are ongoing, their high training and usage costs remain barriers. Compared to inputting entire texts, RAG maintains advantages in cost and efficiency. Nevertheless, RAG must evolve with model advancements, particularly regarding text processing granularity, to avoid over-processing that diminishes overall efficiency. Product development always involves balancing cost, timeliness, and quality.

    2.3 The Vision for RAG

    Moving beyond early RAG applications reliant on vector storage and retrieval, a systematic approach to RAG framework construction is necessary. Content, retrieval methods, and generators (LLMs) are the fundamental elements of a RAG system.

    Optimizing retrievers has been extensively explored due to their convenience. Beyond vector databases, other data structures like key-value, relational, or graph databases should be considered based on data structure. Adjusting retrievers accordingly can further enhance retrieval aggregation.

    LLMs can play a crucial role in content preprocessing. Rather than simple slicing, vectorization, and retrieval, using classification models and LLMs for summarization and reconstruction can store content in more LLM-friendly formats. This approach relies more on semantics than vector distances, significantly improving generation quality. The TypoX AI team has validated this method in product implementation.
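    A minimal sketch of this summarize-and-restructure idea, with trivial stand-ins for the classification model and the LLM summarizer (the function names and label scheme are ours, purely for illustration):

```python
# Instead of slice -> vectorize -> retrieve, preprocess each document with a
# classifier and a summarizer, then store the summary under a semantic label.
# Retrieval then goes by label (semantics) rather than vector distance.
def classify(doc: str) -> str:
    """Stand-in for a classification model."""
    return "defi" if "swap" in doc.lower() else "general"

def summarize(doc: str) -> str:
    """Stand-in for an LLM summary: keep only the first sentence."""
    return doc.split(". ")[0] + "."

store: dict[str, list[str]] = {}
for doc in [
    "Uniswap lets users swap tokens. It runs on automated market makers.",
    "TON is integrated with Telegram. It hosts mini-apps.",
]:
    store.setdefault(classify(doc), []).append(summarize(doc))

print(store)
```

    In a real system both stand-ins would be model calls, but the storage shape is the point: content arrives at generation time already condensed and semantically grouped.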

    Optimizing multiple elements simultaneously, especially integrating retrieval with generation model training, represents a significant direction for RAG. This integration can enhance both retrieval and generation quality. Some argue that advancements in LLM capabilities and lower-cost fine-tuning will render RAG obsolete. However, enhancing f (the LLM’s parameters) and x (the input) are complementary strategies. Advanced RAG also involves model fine-tuning for specific knowledge bases, extending beyond input enhancement. RAG’s value lies in superior generative results compared to raw LLMs, better timeliness and efficiency than fine-tuned models, and lower configuration costs.

    3. Human and AI Interaction

    3.1 Positioning of Large Models

    The author opposes the notion that large models alone can meet all user needs. While powerful, large models alone do not suffice to create fully functional agents. Custom-trained or fine-tuned smaller models showcase a team’s capability but may not always enhance user experience. Investing resources in other elements (tools, knowledge bases, workflows) may yield better results. In the Web3 industry, lacking proprietary training data and evaluation standards, the focus should be on developing industry-specific databases and standards, not merely ranking universal (small) models.

    Underestimating large models is also a mistake. Early TypoX AI product exploration involved low trust in models, leading to overdeveloped processes. Balancing hard logic with LLM involvement, we achieved an optimal balance, addressing issues like increased costs and slower response times due to quality assurance measures (e.g., LLM self-reflection).

    3.2 The Value of Humans

    Advancements in AI capabilities highlight human value more concretely. In human-computer interactions, humans remain the most efficient agents, knowing their needs best. AI can self-reflect, but human decision-making aligns more closely with user requirements. Prioritizing accuracy over response time may not always be wise, as some decisions are better made by users, who should retain primary decision-making authority.

    Ignoring human presence and overemphasizing model performance neglects user experience and the efficiency of human-computer collaboration. Transforming product users into co-creators aligns with the decentralized spirit of the crypto community. From preferences for knowledge bases and tools to prompt and workflow exploration, and the corpus for model training and fine-tuning, all components of DAgent rely on community participation. The TypoX AI team has initiated mechanisms for user collaboration, gradually opening up to the community, ensuring every TypoX AI user, regardless of hardware limitations, can participate in the DAgent ecosystem construction.

  • TypoCurator Dataset

    1
    0 Votes
    1 Posts
    40 Views
    R
    Dataset Address

    https://huggingface.co/datasets/typox-ai/Typo_Intent_OS

    Dataset Description

    3383 pairs of questions and answers about the Web3 knowledge base, used for training and testing AI models applied to Web3 scenarios.

    Data Format

    prompt: AI-generated question.
    completion: The candidate answer selected by the majority of users.

    Example:

    { "prompt": "What is a primary advantage of using a decentralized finance (DeFi) platform?", "completion": "Direct peer-to-peer transactions without intermediaries." }

    Data Source

    This dataset contains AI-generated questions and multiple candidate answers. The correct answers are selected by our Web3 product's end-users based on what they consider the most accurate. The answer chosen by the most users is marked as the completion. To ensure the quality of these evaluations, we use an incentive mechanism to encourage sincere responses. Additionally, we include some seed questions with known answers to filter users. Only those who perform well on these seed questions have their choices counted.
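    For illustration, a record in this prompt/completion format can be checked with a few lines of Python; the schema check below is our own sketch, not an official validator:

```python
import json

# One record in the dataset's prompt/completion format (taken from the
# example in this topic).
raw = '''{
  "prompt": "What is a primary advantage of using a decentralized finance (DeFi) platform?",
  "completion": "Direct peer-to-peer transactions without intermediaries."
}'''

def is_valid(rec: dict) -> bool:
    """Check that both fields exist and are non-empty strings."""
    return all(isinstance(rec.get(k), str) and rec[k].strip() != ""
               for k in ("prompt", "completion"))

record = json.loads(raw)
print(is_valid(record))
```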

    Annotation Method (Brief)

    Generation: Questions and candidate answers are generated with TypoX, a RAG system with a Web3 knowledge base. https://www.typox.ai/
    Evaluation: User selections are collected through the TypoCurator Telegram mini-app. https://t.me/typocurator_bot
    Participation: Each question is evaluated by at least 300 people.
    Majority Rule: An option must be selected by more than 75% of participants to be considered the completion.
    Re-evaluation: If no option reaches 75%, the question is re-evaluated until an option reaches 80%.
    Invalid Questions: If a question is answered by more than 1,000 people without any option reaching 75%, it is marked as invalid.
    Quality Assurance: We preset 500 seed questions with known answers to filter users. Only users who perform well on these questions have their choices counted.
    Statistics: On average, each question is evaluated by 453 people, and the winning option is chosen by 78.9% of participants on average. The dataset has also undergone two rounds of internal review.

    Chinese version

    https://gov.typox.ai/topic/90/typocurator-数据集
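    The selection rules described in this topic (more than 75% to become the completion, re-evaluation pushing toward 80%, invalid after 1,000 answers with no winner) can be sketched as a small function. This is our reading of the stated rules, not code from the actual pipeline:

```python
from collections import Counter

def resolve(votes, threshold=0.75, invalid_after=1000):
    """Apply the dataset's majority rule to one question's vote list.

    Returns ("completion", option) when one option clears the threshold,
    ("invalid", None) when too many votes arrive without a winner, and
    ("re-evaluate", None) otherwise -- the question keeps collecting votes,
    with the bar raised to 80% per the rules above.
    """
    counts = Counter(votes)
    option, top = counts.most_common(1)[0]
    if top / len(votes) > threshold:
        return ("completion", option)
    if len(votes) > invalid_after:
        return ("invalid", None)
    return ("re-evaluate", None)

print(resolve(["A"] * 80 + ["B"] * 20))
```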

  • From For Web3 to By Web3: How Should AI Development Be Decentralized?

    1
    0 Votes
    1 Posts
    19 Views
    BernardB
    1. For Web3

    1.1 More Applications (Agents) or General-Purpose Models?

    General-purpose large models keep improving, but they are still far from truly meeting user needs.

    The smallest unit of an AI application is the Agent. Building an Agent means equipping an LLM with additional knowledge bases, tools, and corresponding instructions. The LLM only sets an Agent's floor; the components the LLM can use set its ceiling. We need more deeply customized Agents (not just simple GPTs) to meet user needs, and then coordination among multiple Agents to handle more complex application scenarios.
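    The floor-and-ceiling framing (the LLM sets the floor; knowledge bases, tools, and instructions set the ceiling) can be made concrete with a minimal sketch; the class and field names are illustrative, not TypoX AI's actual architecture:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """An application-level Agent: an LLM plus the components around it."""
    llm: str                                             # sets the Agent's floor
    knowledge_bases: list = field(default_factory=list)  # these components
    tools: list = field(default_factory=list)            # set its ceiling
    instructions: str = ""

research_agent = Agent(
    llm="any-capable-llm",
    knowledge_bases=["web3-knowledge-base"],
    tools=["realtime-search"],
    instructions="Answer Web3 research questions with cited sources.",
)
print(research_agent)
```

    Two Agents built on the same LLM can behave very differently; under this framing, what differs is everything except the `llm` field.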

    Training a general-purpose large model is a pursuit of a global optimum: the best blended state across many abilities (generation, coding, reasoning, math). That does not mean it necessarily beats purpose-optimized models locally. This is also why some MoE (Mixture of Experts) models can match the performance of much larger models at a smaller scale.

    Training a small LLM certainly shows off a team's strength, but it may not fit real scenarios. Nor is this merely a matter of will: the scarcity of training data limits the possibility of training or fine-tuning a targeted model. Even after choosing a focus scenario, accumulating and curating data is not instantaneous; it requires real user interaction and feedback.

    1.2 TypoX AI

    That is why we built the TypoX AI platform:

    to let users research more effectively with AI assistance, dramatically lowering the barrier to research so that DYOR is no longer an empty slogan.

    To achieve this, we built a dedicated RAG (Retrieval Augmented Generation) framework, equipping the LLM with a Web3 knowledge base and real-time retrieval tools. You can think of it as a Perplexity AI built to serve the Web3 industry.

    With RAG, we consistently get output that beats the raw LLM, which happens to supply the data we need for further training and fine-tuning (training or fine-tuning a model always requires higher-quality Q&A pairs). These real interactions also reflect user preferences, letting the model track the community's needs more closely.

    2. By Web3

    2.1 How Exactly Should AI Development Be Decentralized?

    If LLM-style AI develops to its limit, what room is left for humans?
    At the very least, demands are still raised by people. In fact, humans will remain the demand side for a long time: today's generative models have no autonomous will, so questions still have to be posed by humans.

    More generally, models still improve their Performance on specific Tasks by learning from Experience. Even though current LLMs' abilities have long overflowed the original generation Task, optimizing that overflow still cannot escape this framework:
    Posing problems: define the Task and accumulate data for it (for training and testing);
    Solving problems: train on that data to improve the model's performance on the Task, thereby completing new work.

    I. Decentralized Compute Faces Hard Limits

    Physical laws constrain the decentralization of model training and deployment (the problem-solving side). Communication cost is an insurmountable bottleneck: compared with a centralized compute cluster, a network of GPUs is more like a chain of isolated islands. Small GPU clusters can basically only deploy image-generation models and LLMs around the 10B scale, and the gap in training is even larger. This is not just an efficiency problem but a capability problem. For some time to come, AI development may be unable to shake its dependence on centralized, high-quality compute.

    II. Decentralizing Demand (Evaluation and Data)

    Many people focus on the hardware problem while overlooking the importance of data; in reality, the scale and quality demanded of training data will only keep rising.

    This wave of AI is built on the scaling law of language models: the larger the model, the stronger its abilities. Larger models need more training data, and a model cannot learn much more from content it generated itself. Small LLMs can be trained or fine-tuned on data generated by larger models such as GPT-4, but the data quality that GPT-4-class frontier models need is only higher.

    Whoever controls the data controls the direction of AI development. Today that control rests with a small industry elite, exercised by setting evaluation standards and building training databases. If users cannot raise new demands, all they will get is more leaderboard-chasing general-purpose models, and the scenarios users actually need will go unserved.

    It is time for users to state their own demands for AI development. To achieve this, we need to build users' own evaluation systems, and from there dedicated training databases, thereby earning a say in where AI development goes.

    2.2 TypoCurator

    There are some evaluation methods for general-purpose large models, but evaluation methods for specific AI applications are still scarce. For scenarios with clear-cut answers, such as intent recognition (which DApp/DAgent should a user's request invoke), we need to build corresponding labeled databases to enable targeted evaluation and optimization.

    That is why we launched our newest product, TypoCurator: leveraging the Ton ecosystem's broad user base, it builds a Web3-focused labeled database through community division of labor, preparing data for the Web3 AI OS we plan to train. Ordinary users simply complete daily quiz and annotation tasks to earn $TPX rewards.

    2.3 Agent Arena (Soon)

    Beyond labeled data, many application scenarios have no clear-cut standard. Generating a research report from project information, for example, still involves questions of emphasis, structure, and style even once the facts are straight. These call for users to make comparative choices.

    Inspired by LMSYS Arena, we will soon launch Agent Arena. Users will compare outputs without knowing which Agent produced them, helping us evaluate Agents and models. Early on it will focus on Web3 industry scenarios, where user judgments will help us optimize TypoX AI to better serve the Web3 industry.

    Building on LMSYS Arena, we extend the evaluation target from general-purpose models to Agents, and further refine the mechanism to reduce the risk of cheating and bias. We hope to give AI developers a fair evaluation system and reference, along with customizable A/B testing workflows, advancing the entire AI application ecosystem.

  • 打造TypoX AI的心路历程

    1
    0 Votes
    1 Posts
    25 Views
    BernardB
    1. Embracing Randomness

    To build an AI product, you first need to know what the job actually is. Simply put, it is controlling the LLM's randomness. Randomness is a double-edged sword: used well, it is called generalization; used badly, it becomes hallucination. Creative work tolerates imprecision, and randomness powers its flights of imagination; most users were first stunned by large models' creative generation abilities. But facts, math, and code follow strict logic, with only right and wrong and no fuzzy middle ground, which is where hallucinations mostly appear.

    For math and code, the earlier optimizations were equipping the LLM with external tools (such as Wolfram Alpha) and execution environments (Code Interpreter), followed by further intensive training; with workflow support on top, models can now solve relatively complex math problems and produce practical code of some length.

    For objective facts, large models can indeed store some information in their parameters, but it is hard to predict or guarantee which knowledge points are well fitted, and pushing training accuracy too hard tends to overfit and weaken generalization. Training data also always has a cutoff date: even if the model answered every training question perfectly, it could not predict later events, and training costs mean models will not be updated very often. This is why augmenting generation with external knowledge bases has drawn such broad attention.

    2. Challenges and Vision of RAG

    2.1 What Is RAG?

    Retrieval Augmented Generation (RAG) was proposed as early as 2020, and only with the recent demands of LLM products has it found broad application. Mainstream RAG pipelines do not modify the model's own parameters; they improve output quality by enhancing the model's input. A simple formula, y = f(x), explains it: f is the large model, a function with a huge number of parameters (stochastic and nonlinear); x is the input; y is the output. RAG works on x to raise the quality of y.

    Because it leaves the model itself untouched, RAG has a big cost advantage and high composability: models can be swapped freely. Retrieving only the most relevant content for the model, rather than feeding in full texts, cuts the token count, which lowers usage costs and sharply shortens response time.
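    The token savings from retrieving only the most relevant content, rather than passing full texts, can be illustrated numerically; the whitespace split below is a crude stand-in for a real tokenizer, and the corpus is synthetic:

```python
def n_tokens(text: str) -> int:
    """Crude token count: whitespace split stands in for a real tokenizer."""
    return len(text.split())

# A synthetic 100-document corpus versus a single retrieved passage.
full_corpus = " ".join(
    f"Document {i}: filler text about many unrelated topics." for i in range(100)
)
retrieved = "Document 42: the one passage relevant to the query."

print(n_tokens(full_corpus), n_tokens(retrieved))
```

    Since most providers price by token and latency grows with input length, the gap between the two counts is paid on every request.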

    2.2 Long-Context Models

    Recently, many long-context (large input window) models have appeared that can take an entire long work directly, overcoming the short-input limitation of early large models, which was one of the original motivations for RAG. Many have therefore suggested RAG may decline as a result.

    In needle-in-a-haystack tests (having the model find a specific item in a long text), long-context models do outperform their predecessors, but performance on needles in the middle of the text still falls short of the beginning and end. The test itself also has flaws: public tests are mostly single-needle (only one clue to find), and going from single to multiple needles raises the difficulty sharply, which remains a challenge for large models. Third-party testers have also found that some models may have learned during training what needle to look for (suspected cheating). New models have likely improved on long context, but their robustness may be lower than expected.

    Of course, large models will keep improving on long context, and today's flaws may stop being problems before long, but cost remains a high barrier for both trainers and users. Compared with crudely dumping all the text on the model at once, RAG retains an irreplaceable edge in cost and timeliness. RAG also has to adapt as models advance, especially in text-processing granularity, to avoid over-processing that drags down overall efficiency. In a concrete product, we always have to balance cost, latency, and quality.

    2.3 The Vision for RAG

    Compared with early RAG applications that simply store and recall from a vector database, we need to examine and build new RAG frameworks more systematically, and ongoing RAG research can offer inspiration. Content, retrieval methods and retrievers, and the generator (the LLM) are the basic elements of a RAG system.

    Retriever optimizations are relatively abundant and, thanks to their convenience, were the earliest and most widely applied RAG methods. Beyond that, we should pay more attention to the content itself: a vector database is not necessarily the only choice, and key-value, relational, and graph databases are all worth considering. Choose according to the structure of the data; adjusting the retriever accordingly can further improve retrieval aggregation.

    LLMs can also play a bigger role in content preprocessing. Compared with the simple slice-vectorize-retrieve path, using classification models and LLMs to distill, summarize, and restructure content stores it in a form friendlier to LLMs; relying more on semantics than on vector distance for retrieval greatly improves generation quality. The TypoX AI team has verified this in its product implementation.

    Beyond optimizing a single element, multiple elements can be optimized together. In particular, most RAG methods freeze (do not touch) the generation model; training retrieval and generation jointly will be an important direction for RAG. This raises another topic: the relationship among RAG, fine-tuning, and the LLM. Every time LLM capability jumps or cheaper fine-tuning techniques appear, someone declares that fine-tuning will kill RAG and that RAG is dead.

    Returning to the earlier y = f(x): the LLM and fine-tuning both affect the parameters of f itself, aiming to improve the quality of y without strengthening the input x, while RAG improves y by strengthening x. The two approaches are not in conflict; they converge. Better models and better inputs both help optimize the result. Advanced RAG will also involve fine-tuning the model for a specific knowledge base, going beyond input enhancement alone. RAG's value is that it generates better results than the raw LLM (even against fine-tuned models, RAG wins on timeliness and effect, and its setup cost is no higher than fine-tuning's), and it is far more efficient than dumping all knowledge directly on the LLM.

    3. Humans and AI

    3.1 Positioning Large Models

    The author has always opposed model-maximalism. Large models are strong, but a large model alone is still a long way from the Agent users actually need. An exclusively trained small model or fine-tuned model certainly shows a team's strength, but does it necessarily deliver a better product experience? Investing those resources in the other elements (tools, knowledge bases, workflows) might improve results more visibly. And while the Web3 industry still lacks its own training data and evaluation standards, does our industry really need more leaderboard-chasing general-purpose (small) models? The author calls on the crypto industry to build its own dedicated training databases and evaluation standards, and from there more models truly fitted to the industry.

    Nor should large models be underestimated. In TypoX AI's early product exploration, we gave the model little trust in order to keep the flow controllable; in hindsight, many of those flows were over-engineered. There were also cases where quality-assurance measures (a typical one being LLM self-reflection) raised costs and slowed responses. In later iterations, we gradually tuned the balance between hard logic and LLM involvement in the pipeline, reaching a relatively good equilibrium.

    3.2 The Value of Humans

    Advances in AI actually make human value stand out more concretely. In human-computer interaction, humans are the most efficient Agents: only you know what you want and do not want, and humans will always be the demand side. AI can certainly self-reflect to improve quality and accuracy, but the result may still not match what the user wants. So in product interactions, prioritizing accuracy while ignoring latency is not necessarily wise; some decision processes are better executed by users themselves, and the main decision-making power should be ceded to the user.

    The other face of model-maximalism is ignoring people: chasing model performance while neglecting the human experience in the loop and the efficiency gains of human-machine collaboration. Turning product users into co-builders is the decentralized spirit of the crypto community. From preferences over knowledge bases and tools, to mining prompts and workflows, to the corpora needed for model training and fine-tuning, every component of DAgent depends on community participation. The TypoX AI team has already built an initial set of co-building mechanisms with users, to be opened to the community step by step, so that every TypoX AI user, regardless of hardware constraints, has the chance to take part in building the DAgent ecosystem.

  • Labelbox vs. AWS Ground Truth: A Comparative Analysis

    1
    0 Votes
    1 Posts
    15 Views
    R
    Labelbox

    Official Website: https://labelbox.com/

    Labelbox is currently one of the most mature annotation platforms, supporting various data types and annotation forms, including images, text, point clouds, maps, videos, and medical DICOM images. It offers a wide range of annotation management templates and provides services through AI or human-assisted annotation. Additionally, Labelbox supports operations through its Python SDK.

    In terms of team collaboration, Labelbox provides a series of management tools. However, its interface is relatively traditional and somewhat cumbersome compared to the more modern interfaces of newer projects.


    Amazon SageMaker Ground Truth (AWS GT)

    Official Website: https://aws.amazon.com/cn/sagemaker/groundtruth/

    AWS GT is one of the most traditional annotation service products, offering both AWS-managed and self-service forms. The self-service option allows users to manage their data and the final testing process using AWS-provided annotation tools, while the managed service can provide AI and human annotation services.
    In terms of team collaboration, AWS GT supports teams and large-scale annotation project management through its tight integration with the AWS ecosystem. Users can use AWS IAM to control access permissions and resource management.

    Comparative Analysis

    Both Labelbox and AWS Ground Truth have their own advantages, and users can choose the appropriate platform based on their needs.
    Labelbox: Suitable for users who need highly customized and flexible annotation processes, particularly those in small to medium-sized enterprises and research institutions with high requirements for team collaboration and data quality.
    AWS Ground Truth: Suitable for users already using the AWS ecosystem, especially those large enterprises and projects that require large-scale data annotation and automated annotation features.

  • 1 Votes
    1 Posts
    30 Views
    pllgptP
    Podcast Highlight | Day1Global: Exploring the Opportunities and Development Paths in Web3 and AI Integration

    In this episode of Day1Global, Wenqing "Thomas" Yu dived into how Web3 and artificial intelligence (AI) come together, and the many opportunities and development paths this fusion brings. Here are the key points from their discussion:

    1️⃣ Enhanced Knowledge Graphs

    As the Web3 ecosystem gets more complex with new terms and concepts like Snapshot and Lens Protocol, using AI to build enhanced knowledge graphs is crucial. This helps newcomers learn and understand more easily, and provides developers and decision-makers with accurate data for informed decisions. This technology, highlighted in articles from Techopedia, is vital for improving the transparency and efficiency of the Web3 ecosystem.

    2️⃣ AI Simplifying Web3 Operations

    Operations like participating in ICOs, cryptocurrency trading, or using decentralized applications (dApps) in Web3 are often complex and cumbersome. AI simplifies these tasks through natural language processing and intelligent agent systems, turning them into intuitive commands and interactive interfaces. This lowers the barrier to entry and speeds up technology adoption. Articles on Medium explore how AI can be applied in various Web3 scenarios to enhance efficiency.

    3️⃣ Tokenomics Incentivizing Data Annotation and Model Training

    In Web3, using Tokenomics to incentivize users to participate in data annotation and model training improves the accuracy and efficiency of AI systems. This mechanism brings economic rewards and promotes the cutting-edge development of AI technology, collectively building an intelligent and reliable Web3 ecosystem. For more information, check out the discussion on the TypoX AI Forum.

    4️⃣ Agent-based and Vertical Intent Classification

    AI agents make services more intelligent and personalized, optimizing them based on user preferences and contextual information. This technology enhances user experience and reduces operational complexity, driving further development of Web3 applications. For detailed insights on agent-based applications in Web3, refer to the TypoX AI Forum.

    5️⃣ Importance of AI in Cross-Cultural Content Search

    In a global context, AI technology transcends language and cultural barriers, providing higher quality information and links. This promotes interaction and understanding among global communities, fostering cross-cultural exchange and cooperation. This topic is widely discussed in community dialogues, especially regarding AI's effectiveness in multilingual environments.

    6️⃣ Necessity of Training Enhanced Models for Web3 Verticals

    As Web3 technology advances, the demand for AI models specifically trained for its scenarios increases. Training these models requires high-quality data and specialized algorithm support, combined with Tokenomics incentive mechanisms, to attract active participation from the community and users, jointly driving technological innovation and development. To learn more about training enhanced models for Web3 verticals, refer to the in-depth discussion on the TypoX AI Forum.

    Transcript Summary: Exploring AI's Role in Web 3 Projects

    Event Date: June 13, 2024
    Duration: 1h 20min 20s

    Key Speakers and Their Insights

    Jeffrey Hu (Hashkey Capital)
    Role: Technology Director
    Timestamp: [00:05:20]
    Insight: Jeffrey highlighted the transformative impact of AI in research, particularly in analyzing project documents, comparing industry sectors, and gathering team background information. This not only saves time but also enhances accuracy.
    Expanded Quote: "Using AI, we can analyze project documents in-depth, compare them across various industry sectors, and efficiently gather comprehensive background information on teams. This approach saves a lot of time and significantly enhances the accuracy of our research. It allows us to focus more on critical analysis and less on the time-consuming collection of data."

    Pan Zhixiong (ChainFeeds)
    Role: Founder and Director
    Timestamp: [00:06:30]
    Insight: Pan shared his experience with AI in translation and research, emphasizing tools like ChainBuzz that automate content generation and summarize discussions efficiently.
    Expanded Quote: "AI has significantly improved our ability to translate and research. For instance, tools like ChainBuzz automate the generation of content and provide efficient summaries of discussions. This automation allows us to stay updated with the latest trends and insights without manually sifting through vast amounts of information. It streamlines our workflow and ensures we can focus on more strategic tasks."

    Wenqing "Thomas" Yu (TypoX AI)
    Role: Founder
    Timestamp: [00:08:45]
    Insight: Yu discussed Typo AI's development and its focus on integrating AI infrastructure for Web 3 data. This approach aims to provide tools for better research and engagement in Web 3 projects by addressing specific user needs.
    Expanded Quote: "Typo AI integrates AI infrastructure specifically designed for Web 3 data, providing robust tools for more effective research and deeper engagement in Web 3 projects. By addressing specific user needs, we can enhance the user experience, making it easier for users to navigate complex data sets and derive meaningful insights. Our goal is to create a seamless and intuitive research tool that caters to the unique demands of the Web 3 ecosystem."

    ......

    General Keynotes from the Discussion

    ......

    AI as an Encyclopedia
    Speaker: Wenqing Yu
    Timestamp: [00:12:10]
    Insight: AI serves as an extensive knowledge base, aiding in summarizing white papers and legal documents effectively.
    Quote: "AI serves as an extensive knowledge base, much like an encyclopedia, which is invaluable in summarizing complex white papers and legal documents. This capability helps researchers quickly grasp the core ideas and essential information without wading through dense and lengthy texts. It ensures that critical insights are not overlooked and that users can make informed decisions more efficiently."

    Limitations of Current AI
    Speaker: Jeffrey Hu
    Timestamp: [00:18:55]
    Insight: Despite advancements, AI models like GPT-4 have limitations in context length and specific new knowledge, necessitating human intervention for accuracy.
    Quote: "Despite significant advancements, AI models such as GPT-4 still have notable limitations. These include constraints in context length, which can hinder comprehensive analysis of longer documents, and gaps in specific new knowledge areas that require frequent updates. Therefore, while AI can handle a significant portion of the research process, human intervention remains essential to ensure the accuracy and relevance of the insights derived from AI."

    Practical AI Applications
    Speaker: Jeffrey Hu
    Timestamp: [00:23:40]
    Insight: AI tools enhance research efficiency and accuracy by analyzing documents, summarizing industry sectors, and more.
    Quote: "From analyzing documents to summarizing industry sectors, AI tools have significantly enhanced both the efficiency and accuracy of our research. They allow us to perform detailed comparisons across various sectors quickly and reliably, ensuring that we have a solid foundation of information upon which to base our strategic decisions. This technological edge helps us stay competitive and informed in a rapidly evolving market landscape."

    Challenges and Improvements
    Speaker: Pan Zhixiong
    Timestamp: [00:35:15]
    Insight: Improving AI's logical reasoning and context handling will significantly enhance its effectiveness in research.
    Quote: "Improving AI's logical reasoning and context handling capabilities will significantly enhance its effectiveness in research. Currently, while AI can process large volumes of data, its ability to draw logical connections and handle complex contextual nuances is still developing. By advancing these aspects, we can make AI an even more powerful tool that not only processes information but also provides deeper, more insightful analysis and recommendations."

    Future Prospects
    Speaker: Wenqing Yu
    Timestamp: [00:50:30]
    Insight: The vision is a more powerful AI assistant that provides high-level insights and personalized user experiences.
    Quote: "The vision for the future is a more powerful AI assistant capable of providing high-level insights and highly personalized user experiences. This AI would not only support users in routine tasks but also act as a strategic advisor, helping them navigate complex decisions and uncover new opportunities. By tailoring its responses to individual needs and preferences, this advanced AI could revolutionize the way we interact with technology and data."

    User Engagement
    Speaker: Wenqing Yu
    Timestamp: [01:10:20]
    Insight: Engaging the community to help train AI models through incentivized processes can lead to better AI integration and functionality.
    Quote: "Engaging the community to help train AI models through incentivized processes can significantly enhance AI integration and functionality. By involving users in the training process, we can ensure that the AI learns from diverse perspectives and real-world scenarios. This collaborative approach not only improves the AI's performance but also fosters a sense of ownership and involvement among users, driving innovation and continuous improvement."

    These topics are not only about technological innovation but also about driving sustainable social and economic development through technology. The integration of Web3 and AI will continue to drive the future development of the digital economy and promote global cooperation and sharing. Let's look forward to how this technological fusion will shape our future world.

  • 1 Votes
    1 Posts
    28 Views
    pllgptP
    In this episode of Day1Global, we took a deep dive into the fusion of Web3 and artificial intelligence (AI), and the many opportunities and development paths this fusion brings. Here are the highlights of our discussion:

    1️⃣ Enhanced Knowledge Graphs

    As the Web3 ecosystem grows more complex, with new terms and concepts such as Snapshot and Lens Protocol emerging, using AI to build enhanced knowledge graphs becomes crucial. This not only helps newcomers learn and understand more easily, but also gives developers and decision-makers more precise data support for making informed decisions. Applications of this technology, for example in Techopedia's articles, are vital to improving the transparency and efficiency of the Web3 ecosystem.

    2️⃣ AI Simplifying Web3 Operations

    In Web3, operations such as participating in ICOs, trading cryptocurrencies, or using decentralized applications (dApps) are often complex and cumbersome. Through natural language processing and intelligent agent systems, AI turns these operations into intuitive commands and interactive interfaces, lowering the barrier to entry while accelerating adoption. Explorations of this idea, particularly in articles on Medium, help clarify how AI can be applied across different Web3 scenarios to improve efficiency.

    3️⃣ Tokenomics Incentivizing Data Annotation and Model Training

    In Web3, using Tokenomics to incentivize users to participate in data annotation and model training improves the accuracy and efficiency of AI systems. This mechanism not only delivers economic rewards but also advances the frontier of AI technology, collectively building an intelligent and reliable Web3 ecosystem. For more information on this topic, see the discussion on the TypoX AI Forum.

    4️⃣ Agent-Based Services and Vertical Intent Classification

    Agent-based AI applications make services more intelligent and personalized, optimizing them according to user preferences and contextual information. This technology not only improves user experience but also reduces operational complexity, pushing Web3 applications further forward. For details on agent-based applications in Web3, see the links on the TypoX AI Forum.

    5️⃣ The Importance of AI in Cross-Cultural Content Search

    In a globalized context, AI technology crosses language and cultural barriers to provide higher-quality information and links, promoting interaction and understanding among global communities and advancing cross-cultural exchange and cooperation. This has been widely discussed in community conversations, especially AI's effectiveness in multilingual environments.

    6️⃣ The Need to Train Enhanced Models for Web3 Verticals

    As Web3 technology advances, demand keeps growing for AI models trained specifically for its scenarios. Training these models requires high-quality data and specialized algorithmic support, combined with Tokenomics incentive mechanisms to attract active participation from the community and users, jointly driving technological innovation and development. To learn how to train enhanced models for Web3 verticals, see the in-depth discussion on the TypoX AI Forum.

    Interview Summary: Exploring AI's Role in Web 3 Projects

    Event Date: June 13, 2024
    Duration: 1h 20min 20s

    Key Speakers and Their Insights

    Jeffrey Hu (Hashkey Capital)

    Role: Technology Director
    Timestamp: [00:05:20]
    Insight: Jeffrey Hu highlighted the transformative impact of AI in research, particularly in analyzing project documents, comparing industry sectors, and gathering team background information. This not only saves time but also improves accuracy.

    Expanded Quote: "Using AI, we can analyze project documents in depth, compare across industries, and efficiently gather team background information. This approach saves a great deal of time and markedly improves the accuracy of our research. It lets us focus more on critical analysis instead of spending large amounts of time collecting data."

    Pan Zhixiong (ChainFeeds)

    Role: Founder and Director
    Timestamp: [00:06:30]
    Insight: Pan Zhixiong shared his experience using AI for translation and research, highlighting how tools like ChainBuzz automatically generate content and summarize discussions efficiently.

    Expanded Quote: "AI has greatly improved our ability to translate and research. For example, tools like ChainBuzz can automatically generate content and efficiently summarize discussions. This automation lets us stay current with the latest trends and insights without manually sifting through large amounts of information. It streamlines our workflow and ensures we can focus on more strategic tasks."

    Wenqing Yu (TypoX AI)

    Role: Founder
    Timestamp: [00:08:45]
    Insight: Wenqing Yu discussed the development of Typo AI and its focus on integrating AI infrastructure for Web 3 data. This approach aims to provide better research and engagement tools for Web 3 projects by addressing specific user needs.

    Expanded Quote: "Typo AI integrates AI infrastructure designed specifically for Web 3 data, providing powerful tools for more effective research and deeper engagement with Web 3 projects. By addressing specific user needs, we can improve the user experience, making it easier for users to navigate complex datasets and draw meaningful insights. Our goal is to create a seamless, intuitive research tool that meets the unique needs of the Web 3 ecosystem."

    General Keynotes from the Discussion

    AI as an Encyclopedia

    Speaker: Wenqing Yu
    Timestamp: [00:12:10]
    Insight: AI serves as an extensive knowledge base, helping to summarize white papers and legal documents effectively.

    Quote: "AI serves as an extensive knowledge base, much like an encyclopedia, and is invaluable for summarizing complex white papers and legal documents. This capability helps researchers quickly grasp core ideas and essential information without wading through long, dense texts. It ensures critical insights are not overlooked and lets users make informed decisions more efficiently."

    Limitations of Current AI

    Speaker: Jeffrey Hu
    Timestamp: [00:18:55]
    Insight: Despite progress, AI models like GPT-4 still have limitations in context length and specific new knowledge, requiring human intervention to ensure accuracy.

    Quote: "Despite significant progress, AI models like GPT-4 still have clear limitations. These include context-length constraints, which can hamper comprehensive analysis of longer documents, and gaps in specific new knowledge areas that require frequent updates. So while AI can handle much of the research process, human intervention remains necessary to ensure the accuracy and relevance of the insights it yields."

    Practical AI Applications

    Speaker: Jeffrey Hu
    Timestamp: [00:23:40]
    Insight: AI tools improve research efficiency and accuracy by analyzing documents, summarizing industry sectors, and more.

    Quote: "From analyzing documents to summarizing industry sectors, AI tools have markedly improved the efficiency and accuracy of our research. They let us run detailed cross-industry comparisons quickly and reliably, ensuring we have a solid foundation of information for strategic decisions. This technological edge helps us stay competitive and well informed in a fast-changing market."

    Challenges and Improvements

    Speaker: Pan Zhixiong
    Timestamp: [00:35:15]
    Insight: Improving AI's logical reasoning and context handling will significantly increase its effectiveness in research.

    引述: 「改進人工智能的邏輯推理和上下文處理能力將顯著提高其在研究中的有效性。目前,雖然人工智能可以處理大量數據,但其繪製邏輯聯繫和處理複雜上下文細微差別的能力仍在發展中。通過提升這些方面,我們可以使人工智能成為一個更強大的工具,不僅可以處理信息,還可以提供更深入、更有見地的分析和建議。」 未來展望

    講者:Wenqing Yu
    時間戳:[00:50:30]
    見解:未來的願景是一個更強大的人工智能助手,可以提供高層次的見解和個性化的用戶體驗。

    引述: 「未來的願景是一個更強大的人工智能助手,能夠提供高層次的見解和高度個性化的用戶體驗。這個人工智能不僅支持用戶完成日常任務,還能作為戰略顧問,幫助他們應對複雜的決策並發現新的機會。通過根據個人需求和偏好量身定制其響應,這個先進的人工智能可以徹底改變我們與技術和數據互動的方式。」 用戶參與

    講者:Wenqing Yu
    時間戳:[01:10:20]
    見解:通過激勵機制讓社群參與訓練人工智能模型,可以提高人工智能的整合和功能。

    引述: 「通過激勵機制讓社群參與訓練人工智能模型,可以顯著提高人工智能的整合和功能。通過讓用戶參與訓練過程,我們可以確保人工智能從多樣的視角和真實世界場景中學習。這種合作方式不僅改善了人工智能的性能,還促進了用戶的所有權感和參與感,推動創新和持續改進。」

    These topics are not just about technological innovation; they are about how technology can drive sustainable social and economic development. The convergence of Web3 and AI will continue to power the digital economy of the future and foster collaboration and sharing on a global scale. Let us look forward to seeing how this technological convergence shapes our world.


    The major differentiation lies in our ability to leverage Web3 technology to incentivize human annotators using tokens. The advantages of Web3 include:

    Decentralized Storage and Processing: Achieving secure storage and sharing of data through decentralized means.
    Smart Contract Management: Ensuring transparent and fair task management and payment settlement through smart contracts.

    On traditional centralized annotation platforms, clients typically do not manage annotators directly; the service provider manages them centrally. This approach is effective for controlling the quality and timeliness of annotations. With the advent of LLMs, however, annotation tasks no longer need to rely entirely on human effort: in many cases an LLM can produce preliminary annotations that humans then refine, significantly reducing the difficulty of human annotation.

    Additionally, we can design verification mechanisms for annotations using LLMs and employ token incentives to ensure the quality of annotations performed by workers.
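    As a toy illustration of that verification idea (the function name, threshold, and payout values below are invented for illustration; the real protocol may differ), a worker's labels can be spot-checked against an LLM's reference labels and token rewards released only above an agreement threshold:

```python
def settle_reward(worker_labels, llm_labels, reward_per_task=1.0, threshold=0.8):
    """Sketch: pay token rewards only if the worker's labels agree with the
    LLM reference labels often enough (threshold and payout are assumptions)."""
    assert len(worker_labels) == len(llm_labels)
    agreement = sum(w == l for w, l in zip(worker_labels, llm_labels)) / len(llm_labels)
    payout = reward_per_task * len(worker_labels) if agreement >= threshold else 0.0
    return agreement, payout

# 3 of 4 labels agree (75%), which falls below the 80% threshold
agreement, payout = settle_reward(["A", "B", "A", "A"], ["A", "B", "A", "C"])
```

    In practice the reference labels themselves would be sampled rather than exhaustive, and the settlement would run inside a smart contract rather than a Python function.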

  • Labeling of LLM


    Currently, discussions about training LLMs primarily revolve around fine-tuning or LoRA (Low-Rank Adaptation). Pre-training LLMs is not within the scope of this discussion (as pre-trained LLMs usually do not require manual annotation). The current annotation methods can be classified into three main categories: scoring, ranking, and text annotation.

    Scoring:
    This involves rating the responses generated by the LLM. The most basic form is to rate the overall satisfaction with the response. However, beyond overall satisfaction, scoring can be detailed into several directions, such as:
    Accuracy: Correctness of the answer.
    Completeness: Whether the answer includes all necessary information.
    Relevance: Whether the answer is directly related to the question.
    Language Quality: Clarity and fluency of the language used.
    Creativity: Whether the answer offers unique insights or information.

    Ranking:
    This method involves having the LLM generate several responses and then manually ranking these responses. The ranking criteria can be overall satisfaction or reference the indicators mentioned in the scoring section. Combining scoring and ranking with Reinforcement Learning from Human Feedback (RLHF) annotation systems can be very effective.

    Text Annotation:
    This involves providing a query with one or several manually written answers or annotating several manually written replies within a context. This annotation method is the most basic approach for fine-tuning LLMs, but it is also labor-intensive and complex, usually employed for injecting knowledge into the LLM by human experts.
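    The three annotation styles above can be pictured as simple records; the field names here are illustrative, not a standard schema:

```python
# Scoring: rate one LLM response along several axes (1-5 scales assumed)
scoring = {
    "query": "What is GameFi?",
    "response": "GameFi combines gaming with DeFi mechanics.",
    "scores": {"accuracy": 5, "completeness": 4, "relevance": 5,
               "language_quality": 5, "creativity": 3},
}

# Ranking: order several candidate responses, best first
ranking = {
    "query": "What is GameFi?",
    "candidates": ["resp_a", "resp_b", "resp_c"],
    "ranking": ["resp_b", "resp_a", "resp_c"],
}

# Text annotation: a human-written reference answer for the query
text_annotation = {
    "query": "What is GameFi?",
    "reference_answer": "GameFi refers to blockchain games whose in-game "
                        "assets and rewards are tokenized.",
}
```

    Scoring and ranking records map naturally onto RLHF reward-model training, while text-annotation records are what supervised fine-tuning consumes.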

  • ChatGLM4 vs ChatGPT4o

    Effectiveness

    The overall quality of the two is almost on par. Like GPT-4o, GLM4 supports multimodal input and output, but GLM4's multimodal capabilities are inferior to GPT-4o's. GLM4 performs slightly better in a Chinese-language environment: when asked questions in Chinese, GLM4's answers are slightly superior to GPT-4o's. When asked questions in English, GLM4 does not perform as well as GPT-4o, though the difference is not large. If we define GPT-4o's effectiveness score on English questions as 5, then:

    f52741c5-2245-4307-9555-1d243b63ee33-image.png

    Political alignment: GLM4 adheres to Chinese standards of political correctness, while GPT-4o adheres to Western ones.

    Price and Token Count:

    933891f3-9202-4bc9-93ff-dc9d59c55e7f-image.png

    The price is calculated based on the exchange rate of 1 US Dollar to 7.24 Chinese Yuan.
    Overall, GLM is slightly more expensive than GPT. The various GLM-4 models are detailed below.
    GLM-4-0520 is the best GLM4 model.
    GLM-4-Air has better performance than GPT3.5-turbo but is slightly slower.
    GLM-4-Airx has the same effect as GLM-4-Air but is 2.6 times faster, about the same speed as GPT3.5-turbo.
    GLM-4-Flash is only suitable for simple tasks.

    Ecosystem:
    The GLM4 ecosystem is far less developed than GPT's. As of June 2024, LangChain supports only up to GLM3, to say nothing of multi-agent frameworks like CrewAI.

    Open Source:
    Neither GLM4 nor GPT-4o is open-sourced, though GLM3 is.

  • Connecting CrewAI to Google's Gemini


    Great project


    Crew AI is a powerful and easy-to-use multi-agent framework designed to simplify the AI development process and provide users with flexible tools and resources. In this series of articles, I will share insights and techniques I have gained while learning Crew AI, and demonstrate how to apply this knowledge to practical projects. This is not only a detailed Crew AI tutorial but also showcases its practical application through the development of a GameFi consultation system.

    There are three key elements in the Crew AI framework: Agent, Task, and Crew. An Agent is an entity that performs specific tasks, a Task is a set of instructions defining the actions of an Agent, and a Crew is the system that manages and coordinates the Agents. The entire development philosophy is highly personified, akin to you playing the role of a project manager, defining various employees (Agents), assigning them different tasks (Tasks), and ultimately creating a team (Crew) to fulfill the project's requirements for your clients.

    This chapter will briefly introduce these three elements and create a simple GameFi recommendation system.

    Agent
    An Agent is the basic operational unit in Crew AI, similar to a virtual character with specific duties and capabilities. In Crew AI, an Agent is defined by the following four basic elements, each of which is essentially a single natural-language sentence in the code.
    Role: The role describes the identity and responsibilities of the Agent, defining its position and function within the system. The role determines how the Agent behaves and interacts with other Agents.
    Goal: The goal is the specific task or result the Agent needs to achieve. Each Agent has a clear goal that guides its behavior and decisions.
    Backstory: The backstory provides context and circumstances for the Agent, helping to define its behavior and decision-making style. The backstory can include the Agent's hypothetical experiences, knowledge domain, and working style.
    Tools: Tools are the specific resources or methods the Agent uses to perform tasks. Tools can be external APIs, data processing libraries, search engines, etc. Note that an Agent can still function without tools; they are not mandatory.

    Task
    A Task is the specific work an Agent needs to complete, defined in Crew AI by the following four elements.
    Description: The description is a detailed explanation of the task, clearly stating its purpose and content.
    Expected Output: The expected output defines the specific result or product required upon task completion. It sets clear standards for task completion, ensuring the Agent knows what kind of result to deliver.
    Tools: Tools are the specific resources or methods the Agent uses while performing the task. They are the same as the tools defined for an Agent, meaning an employee can temporarily rely on a tool to complete a specific task.
    Agent: The Agent is the specific role that executes the task.

    Crew
    A Crew is a team of multiple Agents working together to complete more complex tasks. Defining a Crew primarily involves defining a team of Agents and a list of Tasks, as well as a Process, which can be understood as the project execution process.
    Agents: Define the members of the Crew, i.e., which Agents are included.
    Tasks: Define the specific Tasks that need to be completed within the Crew.
    Process: The key to defining a Crew is defining the Process. In simple terms, it is a sequential structure where each task is processed one after the other, with the final task being the Crew's ultimate output. For advanced operations such as defining parallel and sequential tasks, Crew AI has a specialized Process class that will be elaborated on in subsequent chapters.

    Practical Application
    In this practical application, we will temporarily not involve Tools, and the Process only needs to be a sequential structure. The aim of this chapter is to understand the basic usage of Agent, Task, and Crew. So first, we import these three classes.

    from crewai import Agent, Task, Crew

    Next, we define two Agents: one for searching GameFi information and the other for recommending GameFi.

    gamefi_search_agent = Agent(
        role="GameFi Search Assistant",
        goal="Find the GameFi game's detailed according to the user's query,which is {query}",
        backstory="You are a GameFi Search Assistant, your task is to find the detailed page"
                  " of the GameFi game according to the user's query. GameFi is the blockchain game"
    )
    gamefi_recommend_agent = Agent(
        role="GameFi Recommend Assistant",
        goal="Recommend the GameFi game according to the user's query,which is {query}",
        backstory="You are a GameFi Recommend Assistant, your task is to recommend the GameFi game"
                  " according to the user's query. GameFi is the blockchain game"
    )

    Define their tasks: one for searching GameFi and the other for recommendation.

    task_search = Task(
        description="find GameFi games according to {query}",
        expected_output="games' details",
        agent=gamefi_search_agent
    )
    task_recommend = Task(
        description="recommend GameFi games according to {query}",
        expected_output="games' recommendation",
        agent=gamefi_recommend_agent
    )

    Finally, define a Crew to coordinate them.

    crew = Crew(
        agents=[gamefi_search_agent, gamefi_recommend_agent],
        tasks=[task_search, task_recommend],
    )

    After defining the Crew, call the kickoff method and pass in an inputs dictionary, where the keys correspond to the contents defined in {} for Agents and Tasks.

    inputs = {"query": "I need a fps gameFi game"}
    result = crew.kickoff(inputs=inputs)

    Finally, to run the program, you need to define the OpenAI API key in the environment variable.

    export OPENAI_API_KEY="Your API Key"

    Yes, Crew AI uses OpenAI's GPT model as the default LLM. Once the OpenAI API key is defined in the environment variable, the program will automatically obtain this key without additional operations in the code. Subsequent chapters will introduce how to use LLMs other than GPT.

    Next, you can run the program and observe the results.

  • Casual notes on integrating CrewAI with Gemini


    Our company has recently replaced GPT-3.5 entirely with Gemini-Flash. Gemini is Google's LLM family, and the Gemini-Pro version has been available for over a year. Although Gemini-Pro is positioned against OpenAI's GPT-4, its performance is noticeably inferior to GPT-4's, not to mention OpenAI's latest GPT-4o.

    However, the recently launched Gemini-Flash-1.5, which targets GPT-3.5, surpasses it significantly. It not only has a much higher context token limit of 2.8 million (compared to GPT-3.5's maximum of 16K) but is also much cheaper. The table below compares GPT-3.5-turbo-0125 and Gemini-Flash-1.5.

    a3ce5146-d449-4503-baaf-eb84dca4a300-1717636088365.jpg

    However, the Gemini ecosystem is still not as developed as GPT's. For example, CrewAI currently has no ready-made way to connect to Gemini. According to the CrewAI Assistant created by the official CrewAI team, you can reach Gemini through a custom tool, but a custom tool cannot fully replace the default LLM: tools are only invoked once execution enters an Agent, and before that point the default LLM is still required for certain tasks.
    The good news is that LangChain now supports connecting to Gemini; you only need to install langchain-google-genai:

    pip install langchain-google-genai

    Therefore, I will next try to see whether CrewAI can fully replace its default LLM with Gemini by leveraging LangChain.
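    As a rough sketch of what that attempt might look like (untested here; it assumes CrewAI agents accept a LangChain chat model via their `llm` parameter, and the model name and roles are placeholders):

```python
def build_gemini_crew(google_api_key: str):
    """Sketch: wire Gemini into CrewAI through LangChain. Assumes crewai and
    langchain-google-genai are installed; not verified against every version."""
    from crewai import Agent, Task, Crew
    from langchain_google_genai import ChatGoogleGenerativeAI

    gemini = ChatGoogleGenerativeAI(model="gemini-1.5-flash",
                                    google_api_key=google_api_key)
    researcher = Agent(
        role="Researcher",
        goal="Answer the user's question: {query}",
        backstory="A concise research assistant.",
        llm=gemini,  # per-agent override of the default GPT model
    )
    task = Task(
        description="Research and answer {query}",
        expected_output="A short answer",
        agent=researcher,
    )
    return Crew(agents=[researcher], tasks=[task])
```

    If this works as expected, `build_gemini_crew(key).kickoff(inputs={"query": "..."})` would run without an OpenAI key; whether every CrewAI component respects the override is exactly what needs testing.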

  • Customizing Tools in CrewAI: Expanding Agent Capabilities


    Custom tools are a powerful and versatile way to extend the functionality of CrewAI agents. They are easy to create, requiring only inheritance from CrewAI's BaseTool class. Once inherited, the tool's name and description can be defined as follows:

    from crewai_tools import BaseTool

    class MyCustomTool(BaseTool):
        name: str = "Name of my tool"
        description: str = "What this tool does. It's vital for effective utilization."

        def _run(self, argument: str) -> str:
            # Your tool's logic here
            return "Tool's result"

    The _run method can accept multiple arguments, which CrewAI will automatically parse from the user's query. This behavior is similar to OpenAI's tools and can be further customized using an args schema for more precise argument parsing.
    In addition to inheriting from BaseTool, a simpler approach involves using the @tool decorator. For instance:

    from crewai_tools import tool

    @tool("Tool Name")
    def my_simple_tool(question: str) -> str:
        """Tool description for clarity."""
        # Tool logic here
        return "Tool output"

    The method's docstring serves as the tool's description, making this approach more concise.
    Leveraging custom tools empowers CrewAI to connect with various databases, third-party APIs, and more.

  • Crew AI Advanced Study - Process


    In the Crew system, a Crew can be understood as a team project, with Agents and Tasks being the members and tasks within this team project. The order or structure in which these tasks are carried out requires the use of Crew AI's Process mechanism. Currently, there are two developed Processes.

    Sequential Processing
    This is very simple, as tasks are executed in sequence. In the code, you only need to define a Tasks list when defining the Crew, and it will default to sequential processing. You can also explicitly set the process to be sequential.

    Hierarchical Processing
    This is a bit more complex but also flexible enough. Typically, you need to define a manager_llm, which can be understood as a manager that automatically decides the execution order and subset relationships of each task. Of course, you can customize this manager through a prompt.
    In practice, if the tasks are too complex and numerous, this manager_llm will not always be 100% accurate. If you want to manually define the task execution order while supporting parallel and serial structures, you can define the callback method when defining tasks to trigger the next task.


    Today, I attempted to use CrewAI's ScrapeWebsiteTool to establish a RAG system. It can scrape website content based on a given URL and directly use the content to respond without storing it.
    Using the GameFi consulting section as an example, here are the steps to achieve this with two agents:

    GameLinkSearchAgent: Uses ScrapeWebsiteTool to visit the game listing pages and find links to detailed game pages.
    GameDetailAgent: Uses ScrapeWebsiteTool to search for answers within the specified webpage based on user queries and organize the response.

    Comparison with a regular RAG system:

    Regular RAG system:
    Offline work: Scrape content and store it in a Vector DB.
    Online service: User query > Vector search > Document + AI analysis > Response.

    Online crawler RAG system:
    Offline work: None.
    Online service: User query > GameLinkSearchAgent > Link + GameDetailAgent > Response.

    It is evident that the online crawler RAG system is simpler and directly eliminates the offline work of building a VectorDB. However, it has a significant drawback: the response time is too slow.
    The following two images show the time comparison between the online crawler RAG system I tried and the regular RAG system I previously built. The average response time for the regular system is about 4 seconds, while the online crawler RAG system takes over 30 seconds.

    It is easy to see why: live scraping takes time. In contrast, vector search in a Vector DB is a very fast retrieval method: it only requires computing the similarity between the documents and the query in parallel, followed by a quick sort. Compared to hunting for content on a webpage, this saves significant time. Additionally, the documents in the vector database are already parsed, eliminating the time needed for online parsing.
    Although completely eliminating the offline crawling work is still unrealistic in actual development, it does not mean that CrewAI is useless in this context. CrewAI can be used in two scenarios:

    Offline crawling + storing in a vector database: Use CrewAI for offline crawling, parse the scraped content, and store it in a vector database. This improves the efficiency of crawler development without affecting the retrieval speed of the online service.
    Combining the online crawler with the RAG system: A vector database can store a large number of documents, but its coverage is limited and cannot include all information, especially the latest news. Combining the online crawler with the RAG system handles fresh content not yet stored in the vector database, using the crawler's real-time nature to compensate for the vector database's limitations.
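    To make the speed argument concrete, here is a framework-free sketch of the vector-search step: similarities for all stored documents are computed in one pass and sorted, with no network round-trips (the toy 3-dimensional embeddings are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def vector_search(query_vec, docs):
    """Rank stored (name, embedding) pairs by similarity to the query."""
    scored = [(name, cosine(query_vec, vec)) for name, vec in docs]
    return sorted(scored, key=lambda t: t[1], reverse=True)

docs = [
    ("fps_game_page",    [0.9, 0.1, 0.0]),  # toy embeddings
    ("card_game_page",   [0.1, 0.9, 0.2]),
    ("racing_game_page", [0.0, 0.2, 0.9]),
]
top = vector_search([1.0, 0.0, 0.1], docs)[0][0]  # the FPS page ranks first
```

    Every operation here is in-memory arithmetic, which is why retrieval takes milliseconds while live scraping and parsing take tens of seconds.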
  • Introduction to CrewAI


    CrewAI is a multi-agent framework with a more humanized development concept. You act as a project manager, define various employees (agents), assign them different tasks, and finally define a team (crew). This team consists of various agents and subtasks, and the final task of the team is to fulfill the client's requirements.
    To quickly understand CrewAI, you only need to grasp its three basic elements: Agent, Task, and Crew.

    I. Agent

    An agent is the basic operational unit in CrewAI, similar to a virtual role with specific duties and capabilities. In CrewAI, an agent is defined by the following four basic elements, which are described in natural language in the code.

    1. Role
    The role is the identity and duty description of the agent. It defines the agent's position and function within the system, determining the agent's behavior and interaction with other agents.

    2. Goal
    The goal is the specific task or result that the agent needs to achieve. Each agent has a clear goal that guides its behavior and decision-making.

    3. Backstory
    The backstory provides context and background for the agent, helping to define its behavior and decision-making style. It may include the agent's hypothetical experiences, knowledge areas, and working style.

    4. Tools
    Tools are the specific resources or methods used by the agent to perform tasks. These can include external APIs, data processing libraries, search engines, etc. Agents can still work without tools; they are not mandatory.

    1.1 Example

    Suppose we create a Research Specialist Agent.

    1. Role
    Research Specialist

    2. Goal
    Collect and organize detailed information about conference participants, including their background, company, and recent activities.

    3. Backstory
    An experienced researcher with extensive data analysis and information gathering experience, capable of efficiently mining and organizing relevant information to provide strong support for conference preparation.

    4. Tools
    Web scraping tool (ScrapeWebsiteTool), PDF reading tool (PDFSearchTool)

    II. Task

    A task is the specific work that an agent needs to complete. CrewAI defines a task through the following four elements:

    1. Description
    A detailed description of the task, clearly stating its purpose and content.

    2. Expected Output
    The specific results or products required upon task completion. This sets a clear standard for task completion, ensuring the agent knows what kind of result to deliver.

    3. Tools
    The specific resources or methods used by the agent in task execution. These tools are the same as those defined for the agent and can be temporarily relied upon for the task.

    4. Agent
    The specific role that will execute the task.

    2.1 Example

    Suppose we create a "Collect Conference Participants Information" task.

    1. Description
    Collect detailed background information on all conference participants, including their educational background, career experience, current job position, and responsibilities.

    2. Expected Output
    An Excel sheet containing detailed information for each participant, with columns for name, educational background, career experience, current position, and current company.

    3. Tools
    ExcelTool, a tool for creating Excel sheets.

    4. Agent
    Research Specialist Agent

    III. Crew

    A crew is a team composed of multiple agents working together to complete more complex tasks. Defining a crew mainly involves defining a team of agents, a list of tasks, and a process, which can be understood as the project execution process.

    1. Agents
    Define which members (agents) are in the crew.

    2. Tasks
    Define the specific tasks the crew needs to complete.

    3. Process
    The key to defining a crew is defining the process. Generally, this is a sequential structure where the next task starts after the previous one is completed, and the final task represents the crew's ultimate output. For advanced operations such as defining parallel and sequential tasks, CrewAI has a dedicated Process class, which is beyond the scope of this introductory chapter.

    3.1 Example

    Suppose we define a crew to send product quotations.

    1. Agents

    ProductInfoCollectorAgent: Responsible for collecting product information.
    CustomerInfoCollectorAgent: Responsible for collecting customer information.
    PricingCalculatorAgent: Responsible for calculating product prices.
    QuoteDocumentGeneratorAgent: Responsible for generating the quotation document.
    EmailSenderAgent: Responsible for sending the quotation via email.

    2. Tasks

    CollectProductInformationTask: Collect detailed product information.
    CollectCustomerInformationTask: Collect detailed customer information.
    CalculatePricingTask: Calculate prices based on product and customer information.
    GenerateQuoteDocumentTask: Generate the product quotation document.
    SendQuoteEmailTask: Send the quotation to the customer via email.

    3. Process

    Sequential structure.
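    The quotation crew above can be mimicked with a framework-free sequential pipeline, where each task's output feeds the next (the function names mirror the hypothetical agents; product and customer data are invented):

```python
def collect_product_info(ctx):
    ctx["product"] = {"name": "Widget", "unit_price": 10.0}
    return ctx

def collect_customer_info(ctx):
    ctx["customer"] = {"name": "Acme", "quantity": 3}
    return ctx

def calculate_pricing(ctx):
    ctx["total"] = ctx["product"]["unit_price"] * ctx["customer"]["quantity"]
    return ctx

def generate_quote_document(ctx):
    ctx["quote"] = (f"{ctx['customer']['name']}: {ctx['product']['name']} "
                    f"x{ctx['customer']['quantity']} = {ctx['total']:.2f}")
    return ctx

def send_quote_email(ctx):
    ctx["sent"] = True  # stand-in for a real email API call
    return ctx

# Sequential process: each task runs only after the previous one completes
tasks = [collect_product_info, collect_customer_info, calculate_pricing,
         generate_quote_document, send_quote_email]
ctx = {}
for task in tasks:
    ctx = task(ctx)
```

    In CrewAI the shared dictionary is replaced by task outputs passed between agents, but the control flow of a sequential process is exactly this loop.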