Phoenix Tech & Ecosystem AMA — Aug 2023

Phoenix
9 min read · Sep 5, 2023


This past August, the Phoenix community was asked to submit tech- and ecosystem-related questions. Answers from the corresponding Phoenix development teams are provided below.

August 2023 Ecosystem & Tech AMA with Phoenix development teams

Q1. The computation layer is showing some great metrics of usage. Does the team have visibility into who is using it (individuals, companies, etc.), what they are using it for (what specific types of computations — can we get an example), and how their end results are used (what did they use the results for)?

For the CCD usage that's shown in the control panel, has the equivalent amount of PHB been burned?

Jet Liu (Director of Product)

Yes indeed, usage and adoption of the computation layer have been growing steadily, not only in terms of users and AI jobs running but also in the depth to which it's used. The numbers currently sit at over 700 users and over 1,300 continuous jobs running on the platform, roughly 2.5x the figures from three months ago. According to our internal analytics, users range from individual developers to our enterprise pilots and partners, as well as development teams (mostly data science and analytics teams from Web 3, e-commerce, financial services, and gaming, from what we've seen).

The largest segment is actually individual developers, many of whom are algo traders. This is expected — and it's good, because we consider this segment "retail users", and it fits our goal of increasing AI accessibility. We expect growth to accelerate even more with the launch of SkyNet.

Currently, over 90% of users use our Computation Layer for AI-related tasks such as training machine learning models, including deep learning and unsupervised learning. The specific use case or problem each user is solving with the Computation Layer is a private matter and not disclosed to us, but what we typically see is data science models, AI-driven prediction and predictive analytics, and, for non-developer users, easy codeless AI deployment, which is one of the key features of our platform.

Of the users on the platform, around 60% use it via the Computation Layer's API/SDK, which allows them to deploy AI models with simple code, whereas around 40% use codeless deployment, meaning they create jobs through the Control Panel's user interface.
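
To give a feel for what an API/SDK-style deployment could look like, here is a minimal sketch. The actual SDK surface isn't documented in this AMA, so the `phoenix_sdk` package, the `PhoenixClient` class, and every method and parameter below are hypothetical assumptions, not the real Computation Layer interface:

```python
# Hypothetical sketch of an API/SDK-style deployment. The package name
# `phoenix_sdk`, the PhoenixClient class, and its methods are illustrative
# assumptions, not the documented Computation Layer interface.
from phoenix_sdk import PhoenixClient  # hypothetical package

client = PhoenixClient(api_key="YOUR_API_KEY")

# Submit a training job: point the platform at a model definition and a
# dataset, and let the AI Node Network schedule the compute.
job = client.submit_job(
    name="churn-predictor",
    framework="tensorflow",        # mainstream frameworks are supported
    entrypoint="train.py",
    dataset_uri="s3://my-bucket/churn.csv",
)

# Poll until the job finishes, then fetch the trained model artifact.
job.wait()
print(job.status, job.artifact_uri)
```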

As for the CCD usage shown in the control panel: at least that amount of CCD has been minted/swapped over from PHB, and that PHB has been burned. The CCD usage in the Computation Layer dashboard shows CCD that has already been spent on performing various jobs and tasks. The current swap ratio from PHB to CCD is 1:5, and it will stay at this ratio until adjusted or until CCD has its own liquidity.
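
As a quick worked example of that ratio (the usage figure is illustrative, not a real dashboard number): at 1:5, every 5 CCD of dashboard usage implies at least 1 PHB swapped and burned.

```python
# Illustrative arithmetic for the PHB -> CCD swap at the stated 1:5 ratio.
SWAP_RATIO = 5  # 1 PHB mints 5 CCD

def phb_burned_for(ccd_used: float) -> float:
    """Lower bound on PHB burned for a given amount of CCD consumed."""
    return ccd_used / SWAP_RATIO

# Example: 1,000,000 CCD of usage implies at least 200,000 PHB burned.
print(phb_burned_for(1_000_000))
```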

The Computation Layer is an important part of our Hybrid Staking initiative, and we will see that unfold as well after the launch of SkyNet.

Q2. Can you explain what AlphaNet aims to be and offer when the product is fully released? Do you have an estimation when the full product will be released?

Tiger Li, Head of Research @ Tensor Investment Corporation

That's a good question whose significance I think many people don't quite grasp yet. AlphaNet is a native dApp of Phoenix that is essentially a highly differentiated AI platform for the crypto trading market, helping professional traders gain trading edges (advantages) originally available only to trading firms with machine learning and data science teams. The AI capabilities and models of AlphaNet are built upon Phoenix's Computation Layer, which leverages computational resources via the AI Node Network. You can literally see which of AlphaNet's AI processes are running, and how many resources they're using, on the Computation Layer via the Control Panel.

Here's a key concept — growing AlphaNet adoption within the crypto trading community essentially means also growing the Phoenix userbase, holders, and ecosystem within the same audience group. And which is the largest group of users in the entire crypto ecosystem? Yes, you guessed right: traders. The goal of AlphaNet, as the premier dApp of Phoenix, is to gain unprecedented utility and massive reach. If we empower the way the most sophisticated crypto traders trade, then Phoenix will also grow fast and win big.

AlphaNet focuses on meticulously developed and tested AI trading tools and models — hence quality over quantity, as we always emphasize. It's been in beta for a while because we've been rigorously testing the next set of tools and trading strategies to be deployed on the platform for the next phase of Open Beta, which is coming very soon. Each version upgrade is expected to take AlphaNet to the next level — in the next phase we expect to invite trading KOLs and trading firms to come test the platform.

In the next release we expect to ship an AI trading model that outperforms most crypto funds in terms of risk-adjusted return — I think this will be an inflection point at which many more sophisticated individual and retail traders realize the power of the platform. Interestingly, there are already a few crypto trading teams and some retail users profitably using AlphaNet's first Public Beta release to complement their existing trading strategies.

Initially the platform is geared mainly towards teams, whales, and the most sophisticated retail traders — but as iterations and upgrades accumulate over time, there will likely be something for everybody. We also expect to develop AlphaNet marketing and referral programs as we increase feature and product variety and selection.

Q3. What technological advantages over other players in the industry such as FetchAI, SingularityNET or Bittensor does Phoenix offer?

Jimmy Hu, Head of Tech & Ecosystem

An excellent question — I was hoping someone would ask! These are projects with different focuses, albeit with some similarities that I will get into.

Fetch.ai is one of my favorite projects in the space — they focus on intelligent autonomous agents that can perform automation-related tasks, some of which are machine learning-based. I would say Fetch is analogous to a Web 3-based RPA (robotic process automation) platform, focused on automating various processes and increasing efficiency. Fetch revolves around its somewhat abstract "agent" concept, in which various agents help the user complete different tasks.

Bittensor is a very interesting project — hard for some to understand, but quite ambitious. The project is focused on building a completely new decentralized method of machine learning computation that can be scaled across devices, even ones with less computational capacity, and without requiring GPUs. It has its own LLM that supposedly runs on this network, and a chatbot called HAL. Because of its novel approach, however, Bittensor isn't likely to be compatible with mainstream AI frameworks such as TensorFlow and PyTorch, nor with mainstream machine learning models and methods.

SingularityNET, on the other hand, I can't say much about, nor do I know much about its main focus beyond its AGI-themed positioning and its marketplace.

Here’s how Phoenix differentiates from the rest of the pack:

1) The Phoenix Computation Layer is a comprehensive decentralized AI compute network compatible with mainstream AI frameworks, models, and methods. This includes everything from deep learning, unsupervised learning, and reinforcement learning (e.g. AlphaGo) to LLMs (large language models). Use cases span from predictive analytics, data science, and enterprise machine learning to vertical applications such as AI for trading (e.g. AlphaNet) and for visual processing (e.g. NYBL).

2) Phoenix's AI compute infrastructure focuses on three aspects: i) scaling, ii) accessibility, and iii) decentralization. Scaling refers to the ability to scale various AI models and large amounts of data across our AI Node Network. Accessibility refers to making AI more accessible, such as through codeless deployment via our Control Panel or a simplified process via our API. Decentralization refers to decentralized compute resources, which result in higher cost-efficiency and long-term network scale.

3) Depending on the AI model that needs to be run on the network, Phoenix can utilize both CPU and GPU resources — this enables the most efficient and cost-effective computation for any given task (a minimal sketch of this routing decision follows this list).

4) Utility and growth of the Phoenix ecosystem run on two parallel tracks: i) users who use the Phoenix Computation Layer and SkyNet as an infrastructure platform, meaning they use the Control Panel or API for their own AI tasks; and ii) users who use native dApps such as AlphaNet, who effectively grow Computation Layer usage and PHB utility adoption indirectly.

5) The Phoenix tech development team spans three organizations (APEX Technologies, FLC, and Tensor Investment) and includes a diverse array of AI, data science, and data infrastructure experts. In addition, we have experts in industry and vertical applications of AI, including trading (from Tensor); retail, e-commerce, and automotive (from APEX); and financial services and healthcare (from FLC).
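
To make point 3 concrete, here is a purely illustrative sketch of the kind of CPU/GPU routing decision described there. The heuristic, the `JobSpec` shape, and the thresholds are assumptions for illustration, not Phoenix's actual scheduler:

```python
# Illustrative sketch of routing a job to CPU or GPU resources, as in
# point 3 above. The heuristic and the JobSpec fields are assumptions,
# not Phoenix's actual scheduling logic.
from dataclasses import dataclass

@dataclass
class JobSpec:
    model_type: str     # e.g. "deep_learning", "gradient_boosting"
    batch_size: int
    needs_backprop: bool

def pick_device(job: JobSpec) -> str:
    # Deep nets with large batches benefit from GPU parallelism;
    # classical ML and small workloads are often cheaper on CPU.
    if job.model_type == "deep_learning" and (job.needs_backprop or job.batch_size >= 256):
        return "gpu"
    return "cpu"

print(pick_device(JobSpec("deep_learning", 512, True)))      # -> gpu
print(pick_device(JobSpec("gradient_boosting", 64, False)))  # -> cpu
```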

Q4. Does the team still believe that AI on blockchain is the future?

Jimmy Hu, Head of Tech & Ecosystem

AI is the next great tech revolution of the foreseeable future, and it has only just started — decentralized AI is part of that future. AI faces various problems, including but not limited to scalability, resource limitations, data privacy, accessibility/democratization, and ethical concerns. Decentralized technologies and protocols can address and remediate some of these issues to varying degrees.

Phoenix Core Development believes that through rapid empirical experimentation, meticulous execution, and steady adoption, a very prominent path can be carved out for AI in the Web 3 ecosystem — and that we are one of the groups best cut out for the job.

Q5. When was the decision made to create an LLM like ChatGPT? It feels like it was created quite quickly — if so, how was this possible? Who is it developed by, and who will use it?

Phoenix Core Development

The decision to include an LLM as a part of the Computation Layer was not one that was made recently. Phoenix is positioned as a comprehensive decentralized AI compute infrastructure — hence it’s logical that a full spectrum of mainstream and open-source AI technologies will be available via the platform.

PhoenixLLM has been developed by teams at Tensor in conjunction with teams at FLC. Both teams include NLP, deep learning, and generative AI experts with many years of research and development experience.

As it will be launching soon, we welcome developers and users to come try it. PhoenixLLM is different in that it's a Large Language Model compute service — meaning you can utilize PhoenixLLM's proprietary features as well as deploy mainstream open-source LLMs such as Llama. PhoenixLLM's proprietary models will focus on vertical knowledge use cases such as Web 3, tech/software, and trading/markets.
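
For a sense of what "deploying a mainstream open-source LLM such as Llama" involves at the model level, here is a standard Hugging Face `transformers` loading snippet. This is generic open-source tooling shown for context only — it is not PhoenixLLM's own interface:

```python
# Generic open-source example of loading and prompting a Llama-family model
# with Hugging Face transformers. This illustrates the class of workload an
# LLM compute service runs; it is not PhoenixLLM's actual API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated model; requires access approval
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain staking in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```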

Q6. What is the current team size working full time for PHB?

Phoenix Core Development

Great question — some may think that Phoenix is a small project; this couldn't be further from the truth.

There are 27 full-time team members at Phoenix, plus over 60 ad hoc part-time experts and developers who have participated, and will continue to participate, in Phoenix's various tech and product development efforts across the three organizations that are part of the Phoenix DAO.

Q7. Can you give more details about the joint solutions that Phoenix and JD.com will work on together?

Phoenix has a comprehensive ecosystem partnership with JD via APEX Technologies, which has a cloud, AI, and software partnership framework with JD. There are two main areas that involve Phoenix: 1) JD Cloud's and JDC's ecosystem contribution of compute resources to SkyNet, and 2) integration of JD's proprietary AI technology within the SkyNet platform for enterprise use at a much lower cost than traditional cloud computing.

Q8. When will Hybrid staking be ready?

Hybrid Staking will roll out module by module, starting with AlphaNet, then Hybrid Rewards, and then the Computation Layer. There's a misconception that Hybrid Staking is this "one thing" — in reality it's a full initiative whose main purpose is to maximize token utility and staking via a staking-for-value concept, in addition to the existing staking-for-return formula. Hence, as value is delivered via multiple tech modules and features within the Phoenix ecosystem, hybrid staking will become available in multiple places where applicable.

The goal is for Hybrid Staking to eventually cover over 40% of total PHB supply. Combined with regular staking, we are potentially looking at over two-thirds of total supply staked. We foresee Hybrid Staking as the most effective way of increasing the scarcity of PHB.
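
A quick back-of-the-envelope check of those figures (the regular-staking share below is an assumed number chosen purely for illustration):

```python
# Illustrative arithmetic for the staking targets above. The hybrid share is
# the stated 40% goal; the regular-staking share is a hypothetical assumption
# showing how the two can combine to exceed two-thirds of supply.
hybrid_share = 0.40    # stated Hybrid Staking goal
regular_share = 0.27   # assumed regular staking level (hypothetical)

total_staked = hybrid_share + regular_share
print(f"{total_staked:.0%} staked; over two-thirds: {total_staked > 2 / 3}")
```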

Q9. When it comes to technological infrastructure for AI, you need to show up with a solid product. People want something they can see and/or touch — what product can you introduce in that regard?

We speak through our products and technology. You can see or "touch" (through a keyboard, mouse, and the internet) any of our products, from the Computation Layer to AlphaNet. If you want to "touch touch", I suggest maybe waiting for a custom GPU machine that we will likely colocate in one of our partner datacenters!

Q10. When is the new whitepaper going to be released?

Soon!

That concludes this AMA with the Phoenix development teams! Thank you all for your submissions — we look forward to the next one. Please join us on Telegram or Discord if you have any additional questions or need clarification on the above responses.
