  • Is the Beta test event still on?

    Discussions & QAs
    3
    0 Votes
    3 Posts
    42 Views

    thx for the reply

  • 0 Votes
    1 Posts
    70 Views

    The major differentiation lies in our ability to leverage Web3 technology to incentivize human annotators using tokens. The advantages of Web3 include:

Decentralized Storage and Processing: Achieving secure storage and sharing of data through decentralized means.
Smart Contract Management: Ensuring transparent and fair task management and payment settlement through smart contracts.

Traditional annotation services are centralized: clients do not manage annotators directly; instead, the service providers manage the annotators centrally. This approach is effective for controlling the quality and timeliness of annotations. However, with the advent of LLMs, annotation tasks no longer need to rely entirely on human effort. In many cases, LLMs can perform preliminary annotations, which are then refined by humans, significantly reducing the difficulty of human annotation.

    Additionally, we can design verification mechanisms for annotations using LLMs and employ token incentives to ensure the quality of annotations performed by workers.

  • 0 Votes
    1 Posts
    61 Views
TypoX_AI Mod


    At WWDC 2024, Apple garnered global attention by announcing the integration of ChatGPT into iOS 18. This move not only highlights Apple’s innovative leadership in the field of artificial intelligence but also underscores the importance of localizing, miniaturizing, and personalizing AI models. As technology advances, AI models are gradually moving towards device localization, bringing users a safer and more efficient experience. In this wave of innovation, TypoCurator emerges as a revolutionary data annotation tool, dedicated to enhancing the performance and transparency of AI models through user participation and blockchain technology.

Try TypoCurator now: https://t.me/typocurator_bot/TypoCurator

    What is TypoCurator?

    TypoCurator is a data annotation tool embedded in Telegram Mini Apps. Users can assist in optimizing customized AI models by selecting the best answers and earning $TPX token rewards through data annotation. This innovative platform not only simplifies the data annotation process but also ensures the transparency and security of the reward system through blockchain technology.

    Four Major Advantages of TypoCurator

    Seamless Access, Easy to Use
    In today’s fast-paced world, the convenience of user experience is paramount. TypoCurator users only need a smartphone and an internet connection to log in via Telegram and start annotation tasks. There’s no need to download additional apps or undergo cumbersome registration processes, ensuring broad user coverage and lowering the barrier to entry.

    Native Incentives, Instant Redemption
    TypoCurator integrates natively with Telegram’s TON blockchain. Upon completing annotation tasks, users can directly receive $TPX tokens. These tokens are instantly credited to the user’s TON blockchain wallet, allowing users to view and manage their rewards at any time, and withdraw tokens for other blockchain operations. This native incentive mechanism significantly boosts user engagement and trust.

    Decentralized Annotation, Inclusive Diversity
    TypoCurator adopts a decentralized annotation method, breaking the limitations of language, region, and culture. No matter where you are, you can participate and contribute your wisdom and insights. This low-barrier participation ensures the broad representativeness and high quality of data, avoiding biases inherent in traditional annotation methods, thereby providing a solid foundation for optimizing AI models.

    Security and Transparency, Trustworthy
    In an era where data privacy and security are of growing concern, TypoCurator leverages blockchain technology to ensure the transparency and immutability of every transaction. Every participant’s contribution is fairly recorded, and smart contracts ensure fair and efficient reward distribution, greatly enhancing user trust.

    The Significance of Decentralized Annotation for AI Model Miniaturization, Privatization, and Localization

    Decentralized data annotation is not just a technical innovation; it is crucial for promoting the miniaturization, privatization, and localization of AI models. Here are its key roles:

    Data Diversity and Representativeness
    Decentralized annotation allows users from diverse backgrounds worldwide to participate, ensuring data diversity and representativeness. This diverse data is essential for training more generalized and robust AI models, helping to reduce model bias and improve performance across different scenarios.

    Local Processing and Privacy Protection
    With local processing, user data can be annotated and processed directly on the device, avoiding the risks associated with uploading data to the cloud. This not only enhances data processing speed but also greatly protects user privacy. TypoCurator ensures data security through local annotation and decentralized storage.

    Training Miniaturized Models
    Decentralized annotation can provide rich and high-quality datasets for training miniaturized AI models. With distributed computing and edge computing technologies, AI models can be trained and optimized on local devices, reducing reliance on large data centers and improving computational efficiency.

    Personalized Services
    As AI models become localized, users can customize models according to personal needs and preferences. Decentralized annotation not only provides rich data support but also ensures the privacy and immutability of data through blockchain technology, offering users more personalized and trustworthy AI services.

    Apple’s Localized AI Models: Enhancing Privacy and Efficiency

    Apple’s localized and integrated AI model technology showcased at WWDC 2024 not only enhances user experience but also ensures data privacy and security. By running AI models locally on devices, user data is processed faster and privacy is better protected. TypoCurator similarly focuses on user privacy and data security, ensuring all data processing is conducted on user devices, reducing the risk of data breaches through decentralized and blockchain technologies.

    Who Can Benefit from TypoCurator?

    Data Annotators
    No need for complex technical backgrounds; simply answer questions and select the best answers to earn $TPX token rewards, easily transforming knowledge and skills into value.

    AI and Blockchain Enthusiasts
    Participate in data annotation tasks to gain a deep understanding of AI model workings and optimization processes while earning token rewards to purchase learning resources or other uses.

    Web3 Developers
    Obtain high-quality datasets through TypoCurator to optimize customized AI models, enhance development efficiency and model quality, and earn additional token rewards through annotation tasks.

    Future Vision: Connecting the World, Sharing Wisdom

    Against the backdrop of rapid global technological advancements, TypoCurator’s mission is to provide an open, inclusive, and efficient data annotation platform through decentralization and blockchain technology. We believe that every user’s participation will inject new momentum into the development of AI technology. By sharing wisdom and insights, TypoCurator not only enhances AI model performance but also drives technological innovation and progress.

    Join TypoCurator and be part of the global wisdom-sharing movement. Let’s break barriers and share the future together!

    Visit TypoCurator or explore more through Telegram Mini Apps.

  • TypoCurator: Transforming Decentralized Data Management

    Product Release
    1
    0 Votes
    1 Posts
    33 Views
TypoX_AI Mod

    TypoCurator is an essential part of the TypoX AI ecosystem, functioning as a decentralized tagging and data curation protocol that leverages global user intelligence to deliver high-quality Web3 intent datasets. Through TypoCurator, we are redefining data management and tagging, introducing innovation and efficiency to the Web3 world.

    Key Features:

Global Collaboration: Utilize the collective intelligence of millions of Web3 enthusiasts to create top-tier datasets. The Typo Curation Protocol enables decentralized data management, improving data processing efficiency and user experience.
    Automated Data Tagging: Integrate AI with human intelligence to automate data tagging and curation. Typo Intent OS provides a specialized AI fine-tuning framework, enhancing AI intent recognition in Web3 scenarios.
    Instant Rewards and $TPX Incentive System: Participants can earn immediate $TPX rewards by completing tagging tasks, fostering global user engagement and establishing an economically sustainable data management model.
    Gamified Tasks: A Telegram mini-app lets users create intent labeling datasets by solving puzzles, offering a gamified experience.

    TypoCurator is not only a robust tool for data scientists and developers but also generates millions of remote job opportunities for the global Web3 community through innovative incentive mechanisms. Our aim is to offer a more efficient, cost-effective solution for the Web3 ecosystem via a decentralized data management economic model.

  • Labeling of LLM

    AI & DePIN
    1
    0 Votes
    1 Posts
    50 Views

    Currently, discussions about training LLMs primarily revolve around fine-tuning or LoRA (Low-Rank Adaptation). Pre-training LLMs is not within the scope of this discussion (as pre-trained LLMs usually do not require manual annotation). The current annotation methods can be classified into three main categories: scoring, ranking, and text annotation.

    Scoring:
    This involves rating the responses generated by the LLM. The most basic form is to rate the overall satisfaction with the response. However, beyond overall satisfaction, scoring can be detailed into several directions, such as:
    Accuracy: Correctness of the answer.
    Completeness: Whether the answer includes all necessary information.
    Relevance: Whether the answer is directly related to the question.
    Language Quality: Clarity and fluency of the language used.
    Creativity: Whether the answer offers unique insights or information.

    Ranking:
    This method involves having the LLM generate several responses and then manually ranking these responses. The ranking criteria can be overall satisfaction or reference the indicators mentioned in the scoring section. Combining scoring and ranking with Reinforcement Learning from Human Feedback (RLHF) annotation systems can be very effective.

    Text Annotation:
    This involves providing a query with one or several manually written answers or annotating several manually written replies within a context. This annotation method is the most basic approach for fine-tuning LLMs, but it is also labor-intensive and complex, usually employed for injecting knowledge into the LLM by human experts.
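To make the three categories concrete, here is a minimal sketch of how each annotation type could be stored as a record; the field names and contents are illustrative, not from any specific labeling toolkit:

```python
# Illustrative record formats for the three annotation categories.
# Field names are hypothetical, not taken from any particular platform.

scoring_record = {
    "query": "What is LoRA?",
    "response": "LoRA adapts an LLM by training low-rank update matrices.",
    "scores": {  # each direction rated 1-5 by a human annotator
        "accuracy": 5,
        "completeness": 3,
        "relevance": 5,
        "language_quality": 4,
        "creativity": 2,
    },
}

ranking_record = {
    "query": "Explain fine-tuning in one sentence.",
    # candidate responses generated by the LLM, ordered best-to-worst by a human
    "ranked_responses": [
        "Fine-tuning continues training a pre-trained model on task data.",
        "Fine-tuning is extra training.",
        "Fine-tuning is when a model is tuned finely.",
    ],
}

text_annotation_record = {
    "query": "Name the two main RLHF stages.",
    # answers written by human experts, used directly as fine-tuning targets
    "reference_answers": [
        "Reward-model training followed by policy optimization.",
    ],
}

# A ranking implies pairwise preferences usable for RLHF reward modeling:
pairs = [
    (a, b)
    for i, a in enumerate(ranking_record["ranked_responses"])
    for b in ranking_record["ranked_responses"][i + 1:]
]
print(len(pairs))  # 3 pairwise preference examples from 3 ranked responses
```

Note how the ranking format is the densest source of preference pairs, which is why it combines well with RLHF.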

  • ChatGLM4 vs ChatGPT4o

    AI & DePIN
    1
    0 Votes
    1 Posts
    92 Views
    Effectiveness

The overall quality of these two is almost on par. Like GPT4o, GLM4 supports multimodal input and output, but GLM4's multimodal capabilities are inferior to GPT4o's. GLM4 performs slightly better in a Chinese environment: its performance when asked questions in Chinese is slightly superior to GPT4o's. When asked questions in English, GLM4's performance is not as good as GPT4o's, but the difference is not significant. If we define the effectiveness score of GPT4o when asked questions in English as 5, then:

(Image: effectiveness scores of GLM4 vs GPT4o by language)

    Political correctness: GLM4 relies on China’s political correctness, while GPT4o relies on Western political correctness.

    Price and Token Count:

(Image: price and token-count comparison table)

The price is calculated at an exchange rate of 1 US Dollar to 7.24 Chinese Yuan.
Overall, GLM is slightly more expensive than GPT. Below are the various GLM models in detail.
GLM-4-0520 is the best GLM4 model.
GLM-4-Air performs better than GPT3.5-turbo but is slightly slower.
GLM-4-Airx matches GLM-4-Air in quality but is 2.6 times faster, about the same speed as GPT3.5-turbo.
GLM-4-Flash is only suitable for simple tasks.

    Ecosystem:
The ecosystem of GLM4 is far inferior to that of GPT. As of June 2024, langchain only supports up to GLM3, not to mention frameworks like CrewAI for multi-agent systems.

    Open Source:
Neither GLM4 nor GPT4o is open-sourced, but GLM3 is.

  • Connecting CrewAI to Google's Gemini

    AI & DePIN
    2
    0 Votes
    2 Posts
    116 Views

    Great project

  • 0 Votes
    1 Posts
    26 Views

    Crew AI is a powerful and easy-to-use multi-agent framework designed to simplify the AI development process and provide users with flexible tools and resources. In this series of articles, I will share insights and techniques I have gained while learning Crew AI, and demonstrate how to apply this knowledge to practical projects. This is not only a detailed Crew AI tutorial but also showcases its practical application through the development of a GameFi consultation system.

    There are three key elements in the Crew AI framework: Agent, Task, and Crew. An Agent is an entity that performs specific tasks, a Task is a set of instructions defining the actions of an Agent, and a Crew is the system that manages and coordinates the Agents. The entire development philosophy is highly personified, akin to you playing the role of a project manager, defining various employees (Agents), assigning them different tasks (Tasks), and ultimately creating a team (Crew) to fulfill the project's requirements for your clients.

    This chapter will briefly introduce these three elements and create a simple GameFi recommendation system.

    Agent
An Agent is the basic operational unit in Crew AI, similar to a virtual character with specific duties and capabilities. In Crew AI, an Agent is defined by the following four basic elements, each of which is essentially a single natural-language sentence in the code.
Role: The role describes the identity and responsibilities of the Agent, defining its position and function within the system. The role determines how the Agent behaves and interacts with other Agents.
Goal: The goal is the specific task or result the Agent needs to achieve. Each Agent has a clear goal that guides its behavior and decisions.
Backstory: The backstory provides context and circumstances for the Agent, helping to define its behavior and decision-making style. The backstory can include the Agent's hypothetical experiences, knowledge domain, and working style.
Tools: Tools are the specific resources or methods the Agent uses to perform tasks. Tools can be external APIs, data processing libraries, search engines, etc. Note that an Agent can still function without tools; they are not mandatory.

Task
A Task is the specific work an Agent needs to complete, defined in Crew AI by the following four elements.
Description: The description is a detailed explanation of the task, clearly stating its purpose and content.
Expected Output: The expected output defines the specific result or product required upon task completion. It sets clear standards for task completion, ensuring the Agent knows what kind of result to deliver.
Tools: Tools are the specific resources or methods the Agent uses while performing the task. They are the same as the tools defined for an Agent, meaning an employee can temporarily rely on a tool to complete a specific task.
Agent: The Agent is the specific role that executes the task.

Crew
A Crew is a team of multiple Agents working together to complete more complex tasks. Defining a Crew primarily involves defining a team of Agents and a list of Tasks, as well as a Process, which can be understood as the project execution process.
Agents: Define the members of the Crew, i.e., which Agents are included.
Tasks: Define the specific Tasks that need to be completed within the Crew.
Process: The key to defining a Crew is defining the Process. In simple terms, the default is a sequential structure where each task is processed one after the other, with the final task producing the Crew's ultimate output. For advanced operations such as defining parallel and sequential tasks, Crew AI has a specialized Process class that will be elaborated on in subsequent chapters.

Practical Application
In this practical application, we will not yet involve Tools, and the Process only needs to be a sequential structure. The aim of this chapter is to understand the basic usage of Agent, Task, and Crew. So first, we import these three classes.

from crewai import Agent, Task, Crew

    Next, we define two Agents: one for searching GameFi information and the other for recommending GameFi.

gamefi_search_agent = Agent(
    role="GameFi Search Assistant",
    goal="Find the GameFi game's details according to the user's query, which is {query}",
    backstory="You are a GameFi Search Assistant, your task is to find the detailed page"
              " of the GameFi game according to the user's query. GameFi is the blockchain game",
)
gamefi_recommend_agent = Agent(
    role="GameFi Recommend Assistant",
    goal="Recommend the GameFi game according to the user's query, which is {query}",
    backstory="You are a GameFi Recommend Assistant, your task is to recommend the GameFi game"
              " according to the user's query. GameFi is the blockchain game",
)

    Define their tasks: one for searching GameFi and the other for recommendation.

task_search = Task(
    description="find GameFi games according to {query}",
    expected_output="games' details",
    agent=gamefi_search_agent,
)
task_recommend = Task(
    description="recommend GameFi games according to {query}",
    expected_output="games' recommendation",
    agent=gamefi_recommend_agent,
)

    Finally, define a Crew to coordinate them.

crew = Crew(
    agents=[gamefi_search_agent, gamefi_recommend_agent],
    tasks=[task_search, task_recommend],
)

    After defining the Crew, call the kickoff method and pass in an inputs dictionary, where the keys correspond to the contents defined in {} for Agents and Tasks.

inputs = {"query": "I need a fps gameFi game"}
result = crew.kickoff(inputs=inputs)

Finally, to run the program, you need to define the OpenAI API key as an environment variable.

    export OPENAI_API_KEY="Your API Key"

    Yes, Crew AI uses OpenAI's GPT model as the default LLM. Once the OpenAI API key is defined in the environment variable, the program will automatically obtain this key without additional operations in the code. Subsequent chapters will introduce how to use LLMs other than GPT.

    Next, you can run the program and observe the results.

Casual Notes on Integrating CrewAI with Gemini

    AI & DePIN
    1
    0 Votes
    1 Posts
    71 Views

Our company has recently replaced GPT-3.5 with Gemini-Flash entirely. Gemini is Google's LLM, and the Gemini-Pro version has been available for over a year. While Gemini-Pro is positioned as a competitor to OpenAI's GPT-4, its performance is noticeably inferior to GPT-4's, not to mention OpenAI's latest GPT-4o.

However, the recently launched Gemini-Flash-1.5, which targets GPT-3.5, surpasses it significantly. It not only has a far larger context window of up to 1 million tokens (compared to GPT-3.5's maximum of 16K) but is also much cheaper. The table below compares GPT-3.5-turbo-0125 and Gemini-Flash-1.5.

(Image: comparison table of GPT-3.5-turbo-0125 and Gemini-Flash-1.5)

However, the Gemini ecosystem is still not as developed as GPT's. For example, CrewAI currently does not have a ready-made method to connect with Gemini. According to the CrewAI Assistant created by the official CrewAI team, you can connect to Gemini using a custom tool. However, a custom tool cannot fully replace GPT, because tools are only invoked once an Agent is running; before that, the default LLM is still required to handle certain tasks.
The good news is that Langchain now supports connecting to Gemini. You only need to install langchain-google-genai:

    pip install langchain-google-genai

    Therefore, I will next attempt to see if CrewAI can fully replace the default LLM with Gemini by leveraging Langchain.

  • 0 Votes
    1 Posts
    71 Views
TypoX_AI Mod


Welcome to Typo Curator:

🎮 What is Typo Curator?
Dive into our cutting-edge mini app, built to turn AI labeling into a game! Typo Curator invites you to solve puzzles and help build essential datasets, earning $TPX for your contributions along the way.

🚀 Powering Growth with TON:
By making full use of the TON ecosystem's capabilities for acquiring, spending, and staking $TPX, Typo Curator is growing rapidly, boosting the app's appeal and promoting the use of $TPX.


Event Details

⏰ Typo Curator Launch Time:

The big day is June 13 at 06:00 Moscow time! Join our countdown and be part of the excitement, because this isn't just an app launch; it's your chance to contribute to the TON ecosystem and earn rewards.

💡 Participate and Earn Rewards:

Test to Earn: take part in the Typo Curator beta test and earn $TPX rewards! The reward pool is limited, and rewards are distributed first come, first served.
Fast Reward Distribution: beta testers rejoice! You will receive your $TPX within 24 hours of participating.


💸 WHY IS THIS CAMPAIGN CALLED THE 3000 $TON BUYBACK?

3000 $TON buyback: why this name? It's simple: we are using 3000 $TON to buy back $TPX from the current DEX pool as rewards. This approach ensures a focused distribution based on the current market value of $TPX, without increasing the circulating supply.

Transparency
To keep the entire process transparent and verifiable, we will disclose the dedicated treasury address once the 3000 TON buyback begins, allowing the community to monitor the whole buyback on-chain and confirm that all repurchased $TPX is used to incentivize beta test users.

In addition, to prevent data manipulation or wash trading, a dedicated website will be available when the beta testing incentives launch, tracking the queue of test users' addresses in real time so the community can easily check progress.

Get Ready to Participate: full participation details will be published at launch. Follow our official channels to get all the information you need to start.

Join the beta test on Telegram: HERE
Check buyback progress: HERE
    🆎 View the event details in English

  • 0 Votes
    1 Posts
    336 Views
TypoX_AI Mod


Welcome to Typo Curator:

🎮 What is Typo Curator?
Dive into our cutting-edge mini app, which turns AI labeling into a game! Typo Curator invites you to solve puzzles and help build essential datasets, earning $TPX for your contributions along the way.

🚀 Powering Growth with TON:
By fully leveraging the TON ecosystem's native capabilities for acquiring, spending, and staking $TPX, Typo Curator is set to expand rapidly, boosting app adoption and promoting the use of $TPX.

Event Details

⏰ Typo Curator Launch Time:

The big day is June 13 at 11:00 AM (UTC+8)! Join our countdown and be part of the excitement; this isn't just an app launch, it's your gateway into the TON ecosystem and its rewards.

💡 Participate & Rewards:

Test to Earn: join the Typo Curator beta test to earn $TPX rewards! The reward pool is limited, first come, first served.
Fast Reward Distribution: beta testers rejoice! You will receive your $TPX within 24 hours of participating.


💸 Why is this launch campaign called the 3000 TON Buyback?

Why this name? It's simple: we use 3000 TON to buy back $TPX from the current DEX pool as rewards. This approach ensures a focused distribution based on the current market value of $TPX, without increasing the circulating supply.

Buyback Transparency
To keep the entire process transparent and verifiable, we will publish the dedicated treasury address once the 3000 TON buyback begins, so the community can monitor the whole buyback on-chain and confirm that all repurchased $TPX is used to incentivize beta test users.

In addition, to prevent wash trading or data manipulation, a dedicated website will go live alongside the beta testing incentives, tracking the queue of test users' addresses in real time so the community can easily check progress.

Get Ready to Join: full participation details will be announced at launch. Follow our official channels for all the information you need to get started.

➡ Join the beta test on Telegram: click to join
➡ Check buyback progress: click to check
    🆎 View the event details in English

  • TypoX Community Ambassador Recruiting Plan

    Ambassadors & Leaderships
    1
    0 Votes
    1 Posts
    79 Views
TypoX_AI Mod

    Background
    TypoX AI is dedicated to enabling a broader range of users to better connect with Web3 application scenarios and innovative businesses. As a blockchain deeply integrated with Telegram and retail user scenarios, TON will bring nearly one billion users into Web3.

    TypoX AI aims to become the most important AI protocol within the TON ecosystem, accelerating Web3 mass adoption. Through TypoX x TON, more people will be empowered to contribute to the transformation of AI technology.


    English Version
    Operating Mechanism
As an exploratory attempt, the TPX Treasury will allocate a budget of 3,000 $TPX (for the first month) to recruit 5-8 community members as the first batch of community ambassadors.

    During their tenure, ambassadors will assist the community management team with various operational tasks, such as distributing important community information, answering community questions, compiling content for the community forum, and establishing and maintaining community rules. Task assignments will be arranged based on members’ preferences and actual circumstances through consultation.

    The detailed initial action guide will be announced later and maintained and iterated through future community governance meetings and activities.

    How to Participate
    Fill out the form and submit your intention. The community administrator will contact you shortly.
    🔗 Submit here: https://forms.gle/Gq6j9atuUVGYyMCf8

Chinese Version
Operating Mechanism
As an exploratory attempt, the TPX Treasury will prepare a budget of 3,000 TPX to recruit 5-8 community members as the first batch of community ambassadors.

During their tenure, ambassadors will assist the community management team with operational tasks such as distributing important community information, answering community questions, compiling content for the community forum, and building and maintaining community rules. Task assignments will be arranged through consultation, based on members' preferences and actual circumstances.

The detailed initial action guide will be announced later and will be maintained and iterated through future community governance meetings and activities.

How to Participate
Fill out the form and submit your intention; a community administrator will contact you shortly.
🔗 Submit here: https://forms.gle/Gq6j9atuUVGYyMCf8

  • Customizing Tools in CrewAI: Expanding Agent Capabilities

    AI & DePIN
    1
    0 Votes
    1 Posts
    53 Views

Custom tools are a powerful and versatile way to extend the functionality of CrewAI agents. They are easy to create, requiring only inheritance from CrewAI's BaseTool class. Once inherited, the tool's name and description can be defined as follows:
from crewai_tools import BaseTool

class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "What this tool does. It's vital for effective utilization."

    def _run(self, argument: str) -> str:
        # Your tool's logic here
        return "Tool's result"

    The _run method can accept multiple arguments, which CrewAI will automatically parse from the user's query. This behavior is similar to OpenAI's tools and can be further customized using an args schema for more precise argument parsing.
    In addition to inheriting from BaseTool, a simpler approach involves using the @tool decorator. For instance:

from crewai_tools import tool

@tool("Tool Name")
def my_simple_tool(question: str) -> str:
    """Tool description for clarity."""
    # Tool logic here
    return "Tool output"

    The method's docstring serves as the tool's description, making this approach more concise.
    Leveraging custom tools empowers CrewAI to connect with various databases, third-party APIs, and more.

  • 🏆 Winners of TypoX AI Zealy Sprint 2

    Zealy
    1
    0 Votes
    1 Posts
    34 Views
TypoX_AI Mod

    🎉 Congratulations to the winners of TypoX AI Zealy Sprint 2! 🏆

A huge thank you to everyone who participated. We have distributed the rewards on BNB Chain based on the rankings below; please check your wallet on the BNB Chain network:

    🥇 1st Place: $50
    🥈 2nd Place: $30
    🥉 3rd Place: $10
    🎖️ 4th-25th Place: $5 each

    If you have any questions, feel free to DM us in our Telegram group: https://t.me/TypoGraphyAI.

    Please note that future rewards will be integrated with the TON ecosystem and TPX Token market activities.

    Stay tuned for more updates! 🚀


  • Everything you need to know about TPX.

    TPX Token Ops & Governance
    5
    0 Votes
    5 Posts
    372 Views

    @chiangmai Thanks for your help, I managed to file a claim

  • Exciting News: TypoX AI IDO Launch!

    Moved TPX Token Ops & Governance
    5
    0 Votes
    5 Posts
    104 Views

    @chiangmai I can't reply to your message, the chat won't let me through

  • 0 Votes
    1 Posts
    287 Views
TypoX_AI Mod

    Typo Curator Mini App Beta Launch & $TPX Giveaway: 3000 $TON Buyback Event

    Welcome to Typo Curator:

    🎮 What's Typo Curator?
    Dive into our cutting-edge mini app, designed to turn AI labeling into a game! Typo Curator invites you to solve puzzles and help build essential datasets, all while earning $TPX for your contributions.

    🚀 Powering Growth with TON:
    By fully utilizing the TON ecosystem's native capabilities for acquiring, spending, and staking $TPX, Typo Curator is set to expand quickly, enhancing app adoption and promoting $TPX usage.


    Event Details

    ⏰ Typo Curator Launch Timing:

The big day is June 13 at 11:00 AM (UTC+8)! Join our countdown and be part of the excitement, as this isn't just an app launch; it's your gateway to contributing to the TON ecosystem while earning rewards.

    💡 Participate & Rewards:

Test to Earn: Participate in the Typo Curator beta testing to earn $TPX rewards! The reward pool is limited, first come, first served.
Fast Reward Distribution: Beta testers rejoice! You'll receive your $TPX within 24 hours of participation.


    💸 WHY IS THIS LAUNCH CAMPAIGN CALLED THE 3000 TON BUYBACK?

3000 TON Buyback: Why this name? It's simple: we're using 3000 TON to buy back $TPX from the current DEX pool for rewards. This approach ensures a focused distribution based on the current market value of $TPX, without increasing the circulating supply.

    Transparency
    Considering the transparency and authenticity of the entire process, we will disclose a dedicated treasury address after the start of the 3000 TON buyback, allowing the community to monitor the entire buyback process on-chain and ensuring that all repurchased $TPX is used for Beta testing user incentives.

    Additionally, to prevent wash trading or data manipulation, a dedicated website will be available when Beta testing incentives go live, allowing real-time tracking of the test user address queue for the community to easily check progress.

    🔔 Get Ready to Jump In: Full participation details will be released as we launch. Stay tuned to our official channels to get all the information you need to get started.

    Join Beta Testing Telegram: HERE
    Check buyback progress: HERE
    🈯 查看中文版活動詳情

  • Crew AI Advanced Study - Process

    AI & DePIN
    1
    0 Votes
    1 Posts
    45 Views

    In the Crew system, a Crew can be understood as a team project, with Agents and Tasks being the members and tasks within this team project. The order or structure in which these tasks are carried out requires the use of Crew AI's Process mechanism. Currently, there are two developed Processes.

    Sequential Processing
    This is very simple, as tasks are executed in sequence. In the code, you only need to define a Tasks list when defining the Crew, and it will default to sequential processing. You can also explicitly set the process to be sequential.

    Hierarchical Processing
    This is more complex but also more flexible. Typically, you define a manager_llm, which can be understood as a manager that automatically decides the execution order and sub-task relationships among the tasks. You can also customize this manager through a prompt.
    In practice, if the tasks are too complex or numerous, the manager_llm will not always be 100% accurate. If you want to define the task execution order manually while still supporting parallel and serial structures, you can set a callback when defining each task to trigger the next one.
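    The callback-driven manual ordering described above can be sketched in plain Python. This is not CrewAI code: the Task class, run_sequential, and the task names here are hypothetical stand-ins used only to contrast list-order execution with explicit callback chaining.

    ```python
    # Minimal plain-Python sketch (no CrewAI dependency) of the two ordering
    # styles: sequential list-order execution vs. callback-driven chaining.
    from typing import Callable, Optional


    class Task:
        def __init__(self, name: str, callback: Optional[Callable[["Task"], None]] = None):
            self.name = name
            self.callback = callback  # triggered after this task finishes

        def run(self, log: list) -> None:
            log.append(self.name)       # "execute" the task
            if self.callback:
                self.callback(self)     # hand off to the next task


    def run_sequential(tasks, log):
        # Sequential-processing analogue: execute tasks in list order.
        for t in tasks:
            t.run(log)


    # Sequential: order comes from the list itself.
    log_seq = []
    run_sequential([Task("research"), Task("write"), Task("review")], log_seq)

    # Callback-driven: each task explicitly triggers its successor,
    # which also allows custom parallel/serial structures.
    log_cb = []
    review = Task("review")
    write = Task("write", callback=lambda _t: review.run(log_cb))
    research = Task("research", callback=lambda _t: write.run(log_cb))
    research.run(log_cb)

    print(log_seq)  # ['research', 'write', 'review']
    print(log_cb)   # ['research', 'write', 'review']
    ```

    Both styles produce the same order here; the callback form matters when you need to branch or fan out rather than follow a single list.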

  • 0 Votes
    1 Posts
    29 Views
    R

    Today, I tried using CrewAI's ScrapeWebsiteTool to build a RAG system. It scrapes website content from a given URL and uses that content directly in the response, without storing it.
    Using the GameFi consulting section as an example, here are the steps to achieve this with two agents:

    GameLinkSearchAgent: uses ScrapeWebsiteTool on the game listing pages to find links to the detailed game pages.
    GameDetailAgent: uses ScrapeWebsiteTool to search for answers within the specified webpage based on the user query and organize the response.

    Comparison with a regular RAG system:

    Regular RAG system:
    Offline work: scrape content and store it in a Vector DB.
    Online service: user query > vector search > documents + AI analysis > response.

    Online crawler RAG system:
    Offline work: none.
    Online service: user query > GameLinkSearchAgent > link + GameDetailAgent > response.
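    The online-crawler pipeline above can be sketched as two plain functions chained together. The agents are stubbed with hard-coded page data; the game name, URLs, and page contents are made up for illustration, with each function standing in for a ScrapeWebsiteTool call.

    ```python
    # Stubbed sketch of the two-agent online pipeline described above.
    # In the real setup each function would scrape a live page.

    LISTING_PAGE = {"CryptoKitties": "https://example.com/games/cryptokitties"}
    DETAIL_PAGES = {
        "https://example.com/games/cryptokitties": "CryptoKitties launched in 2017 on Ethereum."
    }


    def game_link_search_agent(query: str) -> str:
        # Stand-in for scraping the listing page to find the game's detail link.
        for name, url in LISTING_PAGE.items():
            if name.lower() in query.lower():
                return url
        raise LookupError("no matching game link found")


    def game_detail_agent(url: str, query: str) -> str:
        # Stand-in for scraping the detail page and answering from its content.
        return DETAIL_PAGES[url]


    def answer(query: str) -> str:
        # user query > GameLinkSearchAgent > link + GameDetailAgent > response
        return game_detail_agent(game_link_search_agent(query), query)


    print(answer("When did CryptoKitties launch?"))
    ```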

    It is evident that the online crawler RAG system is simpler and directly eliminates the offline work of building a VectorDB. However, it has a significant drawback: the response time is too slow.
    The following two images show the time comparison between the online crawler RAG system I tried and the regular RAG system I previously built. The average response time for the regular system is about 4 seconds, while the online crawler RAG system takes over 30 seconds.

    It is easy to see why: live scraping takes time. In contrast, vector search in a Vector DB is a very fast retrieval method: it only needs to compute, in parallel, the similarity between the query and each stored document, then quickly sort the results. Compared with finding content on a live webpage, this saves significant time. Additionally, the documents in the vector database are already parsed, eliminating the time needed for online parsing.
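    The similarity-then-sort retrieval step can be shown in a minimal sketch, assuming toy 3-dimensional embeddings in place of real model output (the document names and vectors are invented):

    ```python
    # Toy vector search: score every stored embedding against the query,
    # then sort and return the top-k document names.
    import math


    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)


    # Pretend these are pre-computed (offline-parsed) document embeddings.
    docs = {
        "doc_gamefi": [0.9, 0.1, 0.0],
        "doc_defi":   [0.1, 0.9, 0.0],
        "doc_nft":    [0.5, 0.5, 0.1],
    }


    def vector_search(query_vec, k=2):
        scored = sorted(docs.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
        return [name for name, _ in scored[:k]]


    print(vector_search([1.0, 0.0, 0.0]))  # doc_gamefi ranks first
    ```

    A real Vector DB does the same thing at scale with approximate-nearest-neighbor indexes, which is why it stays fast while live scraping does not.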
    Although completely eliminating the offline crawling work is still unrealistic in actual development, it does not mean that CrewAI is useless in this context. CrewAI can be used in two scenarios:

    Offline crawling + storing in a vector database: use CrewAI for the offline crawling, parse the scraped content, and store it in a vector database. This improves the efficiency of crawler development without affecting the retrieval speed of subsequent online services.

    Combining the online crawler with the RAG system: while a vector database can store a large number of documents, its coverage is limited and cannot include all information, especially the latest news. Combining the online crawler setup with the RAG system handles fresh content that has not yet been stored in the vector database, leveraging the crawler's real-time nature to compensate for the vector database's limitations.
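    The second scenario boils down to a simple fallback: answer from the vector store when it has coverage, and invoke the slower live-crawl path only on a miss. In this sketch both backends are stubbed, and vector_lookup and live_crawl are hypothetical names, not CrewAI APIs.

    ```python
    # Hybrid retrieval sketch: fast vector-store hit, slow live-crawl fallback.

    # Pretend this is the pre-indexed content in the Vector DB.
    VECTOR_STORE = {"older game": "indexed answer from vector DB"}


    def vector_lookup(query: str):
        # Fast path: returns None on a miss.
        return VECTOR_STORE.get(query)


    def live_crawl(query: str) -> str:
        # Slow path: stand-in for the online-crawler pipeline
        # (e.g. agents using ScrapeWebsiteTool against live pages).
        return f"freshly scraped answer for: {query}"


    def hybrid_answer(query: str) -> str:
        hit = vector_lookup(query)
        return hit if hit is not None else live_crawl(query)


    print(hybrid_answer("older game"))      # served from the vector store
    print(hybrid_answer("newest release"))  # falls back to live crawling
    ```

    In production the "miss" condition would be a similarity-score threshold rather than an exact key lookup, but the control flow is the same.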
  • S1&S2 Airdrop

    Discussions & QAs
    3
    1 Votes
    3 Posts
    139 Views
    K

    Successful investors and those in it for the long term will not be affected, because that's just how the market works; they've experienced it before, so it's natural for them. We cannot stop the price from fluctuating, and that's normal. It's mostly those chasing short-term goals who are affected, because they can't sell at their desired price and make money instantly. Speculation and wrong biases can lead them to bad decisions. I don't mind the price; we are still early, and this project is on my long-term list.
    #LFG #WAGMI