Raven Protocol (RAVEN) Deep-learning - ICO Review, Rating and Details of Raven Protocol ICO (Token Sale) - Rating: Gamble

Raven Protocol is a decentralized and distributed deep-learning training protocol that provides cost-efficient and faster training of deep neural networks by utilizing the compute resources in the network.

Problems & Solutions

The Problem: 

  • Expensive Servers: Deep learning is an expensive and time-consuming process. With engineer salaries shooting above 150k USD, even maintaining and running such intelligent algorithms is expensive. Training a relatively simple 19-layer convolutional neural network on a million images (ImageNet) usually takes around 2 weeks of continuous training. That translates to 336 hours on an EC2 instance at about 2.6 USD per hour, costing on the order of 900 USD (see the short cost sketch after this list), and that is just one trained model. In this industry, one million images is a paltry figure, because there are millions of different items on which image recognition needs to automate tasks. Moreover, these models need to be retrained at least once a week to stay up to date. Image classification is only one of hundreds of ways deep learning is applied to transform and disrupt existing workflows and businesses.

  • Training Speed: GPGPU implementations for training deep neural nets are one of the advances that have driven change across the AI ecosystem, thanks to Andrew Ng. They single-handedly pushed the field forward by several years, dramatically accelerating the training of deep neural nets. NVIDIA's soaring share price is one of the biggest testaments to this, and NVIDIA itself keeps pushing the boundaries through constant development and various programs to support the ecosystem. Still, training deep neural networks on GPUs is a localized way of training and can only scale so far, given the limit on the number of cores that can be fitted inside a single GPU chip. GPUs deserve much of the credit for the success and mass-scale adoption of AI today, but the computational acceleration used in training models needs to keep pace with the rapid development of AI systems. Historically, the proliferation of information was achieved with a network of computers: what we now call the internet and the cloud. There is a need to speed up the training process so that companies can quickly provide personalized experiences rather than pooling work for later, and to make it cost-efficient for them.
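
To put the cost claim above in perspective, here is a minimal back-of-the-envelope sketch in Python. The instance price and training duration are simply the figures quoted in this section; real cloud prices vary by provider, region, and instance type.

```python
# Back-of-the-envelope cost of one training run, using the figures quoted
# above (roughly 2 weeks of continuous training at ~2.6 USD/hour).
# These numbers are illustrative; actual cloud pricing varies widely.
HOURS_PER_DAY = 24

def training_cost_usd(days: float, hourly_rate_usd: float) -> float:
    """Cost of renting one cloud instance continuously for `days` days."""
    return days * HOURS_PER_DAY * hourly_rate_usd

print(f"{training_cost_usd(14, 2.6):,.0f} USD")  # -> 874 USD for a single trained model
```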

The Solution:

The Raven Protocol distributes heavy deep-learning training across the ecosystem using the blockchain and incentivizes those who contribute their compute resources with Raven tokens, introducing a new protocol backed by a tested algorithm for distributing deep-learning training.

  • Raven Protocol provides a channel for distributing the training of deep-learning algorithms. Contributors to the network share their compute resources to enable training for users in the network and are in turn rewarded with Raven (RAVEN) coins/tokens through the blockchain. Those who offer distributed AI training in the market today either train the entire deep neural net architecture on a single node or split the architecture into a few large segments that are then trained on servers with ample heavy-compute power. Raven Protocol, in contrast, enables a single deep-learning architecture to be split down to the micro level and trained across many dynamically allocated nodes within the Raven network, billed as the first truly distributed training algorithm in the world (a minimal sketch of the idea follows this list).

  • With a distributed network of computing resources, Raven Protocol creates a platform to connect all AI services together, bringing together the larger AI community as well. Raven Protocol is the central hub for performing AI training and development. The platform will integrate AI services such as data exchanges, data labeling/annotation services, shared models, and implemented algorithms that may be developed by other projects or members of the AI community. All integrations with the platform will be supported by the Raven Protocol and allow for seamless communication between services.

  • Raven Protocol opens up a platform for individuals to contribute idle compute power to the network, adding value by aiding the training of AI models. Contributors are then compensated by those who use the resources, which can be shared from everyday devices such as phones and laptops. With the correct incentivisation in place, AI training can become accessible to the masses either for free or at deeply discounted rates; this would undoubtedly push AI technology to levels unimaginable at present.
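
To make the idea of splitting one training job across many nodes concrete, here is a minimal data-parallel sketch in Python/NumPy. The linear model, the batch sharding, and the plain gradient averaging are illustrative assumptions, not Raven's actual "micro level" split, which is not detailed in this section.

```python
# Minimal data-parallel sketch: one batch is split across several nodes,
# each node computes gradients on its shard, and the averaged gradient
# drives the weight update. This only illustrates the general idea of
# distributing gradient work; it is not Raven's actual algorithm.
import numpy as np

def node_gradient(weights: np.ndarray, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Mean-squared-error gradient for a linear model on one node's shard."""
    preds = x @ weights
    return 2.0 * x.T @ (preds - y) / len(y)

def distributed_step(weights, x, y, n_nodes=4, lr=0.1):
    """Split one batch across `n_nodes` shards and average their gradients."""
    x_shards = np.array_split(x, n_nodes)
    y_shards = np.array_split(y, n_nodes)
    grads = [node_gradient(weights, xs, ys) for xs, ys in zip(x_shards, y_shards)]
    return weights - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
x, true_w = rng.normal(size=(256, 8)), rng.normal(size=8)
y = x @ true_w
w = np.zeros(8)
for _ in range(200):
    w = distributed_step(w, x, y)
print("max weight error:", np.abs(w - true_w).max())  # converges to ~0
```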

Features & Highlights

A Requester can ask Raven Protocol to train a customized deep-learning model. Once the request has been registered, it is queued in the Request Queue. Each request is processed sequentially and goes through the Deep Learning Algorithm Selector, where the algorithm best suited to the incoming request is selected. The selected algorithm is passed to the Distribution Server, where a list of matrix calculations is prepared to compute the gradients. The Distribution Server also calls for the batches (Bi) of the supplied dataset (DSi) through the Calculation Queue. The compute nodes (Ni) in the network are registered at the Distribution Server, which assigns calculations to the different nodes according to the Calculation Queue. Nodes return the results of their calculations as gradients, which are registered at the Gradient Collector. The Gradient Collector and the Calculation Queue work in tandem: calculations for the different layers of a deep neural net are updated in the Calculation Queue based on the status and results obtained from the Gradient Collector via the Distribution Server. The result of the process is a trained model whose weights can then be exposed for use. A few details related to incentivisation and data transfer follow:
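
The flow above is easier to follow as code. The sketch below is a simplified, in-memory rendering of the described pipeline; the class and queue names mirror the text, but the synchronous loop, round-robin assignment, and toy node callables are assumptions, not Raven's actual implementation.

```python
# Simplified, in-memory sketch of the described pipeline: requests are queued,
# an algorithm is selected, the Distribution Server fans calculations out to
# registered compute nodes, and the Gradient Collector gathers the results.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class TrainingRequest:
    requester: str
    dataset_batches: list                      # batches B_i of the supplied dataset DS_i
    task: str = "image_classification"

@dataclass
class DistributionServer:
    nodes: list = field(default_factory=list)              # registered compute nodes N_i
    calculation_queue: deque = field(default_factory=deque)
    gradient_collector: list = field(default_factory=list)

    def select_algorithm(self, request: TrainingRequest) -> str:
        # Stand-in for the Deep Learning Algorithm Selector.
        return "cnn" if request.task == "image_classification" else "mlp"

    def process(self, request: TrainingRequest) -> list:
        algorithm = self.select_algorithm(request)
        for batch in request.dataset_batches:                # list of matrix calculations
            self.calculation_queue.append((algorithm, batch))
        while self.calculation_queue:                        # assign work round-robin
            for node in self.nodes:
                if not self.calculation_queue:
                    break
                calc = self.calculation_queue.popleft()
                self.gradient_collector.append(node(calc))   # node returns a "gradient"
        return self.gradient_collector

# Usage: two toy "nodes" that just tag the work they were given.
server = DistributionServer(nodes=[lambda c: ("node-1", c), lambda c: ("node-2", c)])
request_queue = deque([TrainingRequest("requester-1", dataset_batches=[0, 1, 2, 3])])
while request_queue:
    print(server.process(request_queue.popleft()))
```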

  • Proof of Calculation
    • Proof of calculation is the primary guideline for regulating and distributing incentives to the compute nodes (Ni) in the network. The two prime deciders for incentive distribution are:
      • Speed: How fast a node can calculate the gradients and return them to the Gradient Collector.
      • Redundancy: Only the 3 fastest redundant calculations qualify for the incentive. This ensures that the gradients being returned are genuine and of the highest quality.
  • Compute Usage
    • Crowd-sourced gathering and allocation of compute resources is key to Raven Protocol and is subject to the following criteria:
      • Permissions: Raven requests explicit permission from those who participate by lending their systems as compute nodes (Ni) for training models. This awareness is meant to set the project apart from companies that have, in the past, taken advantage of user ignorance; the stated focus is on being transparent and explicit about how much of the user's resources the network will utilise.
      • CPU/GPU allocation: Allocating CPU/GPU resources is optional on the contributor's side. Such allocation is also assigned more value in incentivisation, as it aids the training of DL networks.
      • Throttling: Allocated resources are expressed as percentages, and those percentages can be throttled and controlled from the user's end. The number of calculations performed is proportional to the amount of compute made available. Resource allocations are simultaneously registered on the blockchain and incentivised accordingly. For example, if a user allows the network to use 5% of their computing resources, the network would send 10 calculations per second, whereas if the user allows 10%, the network would send 20 calculations per second. (A small sketch of both incentive rules follows this list.)
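
Here is a small Python sketch of both rules. The agreement tolerance, the top-three selection, and the 2-calculations-per-second-per-percentage-point throttle are assumptions drawn only from the description above; the actual verification and scheduling logic is not specified in this section.

```python
# Illustrative sketch of the two incentive rules described above:
# (1) reward the three fastest nodes whose returned gradients agree, and
# (2) throttle the calculation rate in proportion to the allocated share
#     (2 calcs/s per percentage point, matching the 5% -> 10 calcs/s example).
import numpy as np

def rewarded_nodes(submissions, tolerance=1e-6, required=3):
    """submissions: list of (node_id, seconds_taken, gradient ndarray).

    Group submissions whose gradients agree within `tolerance`, then reward
    the `required` fastest nodes in the largest agreeing group.
    """
    groups = []                                   # each group holds agreeing submissions
    for sub in submissions:
        for group in groups:
            if np.allclose(sub[2], group[0][2], atol=tolerance):
                group.append(sub)
                break
        else:
            groups.append([sub])
    consensus = max(groups, key=len)              # the gradient most nodes agree on
    fastest = sorted(consensus, key=lambda s: s[1])[:required]
    return [node_id for node_id, _, _ in fastest]

def calculations_per_second(allocated_percent: float, per_point: float = 2.0) -> float:
    """5% allocation -> 10 calcs/s, 10% -> 20 calcs/s, as in the example above."""
    return allocated_percent * per_point

g = np.array([0.1, -0.2])
subs = [("n1", 1.2, g), ("n2", 0.8, g), ("n3", 2.5, g),
        ("n4", 0.9, g + 1.0), ("n5", 1.0, g)]     # n4 returns a bad gradient
print(rewarded_nodes(subs))                        # ['n2', 'n5', 'n1']
print(calculations_per_second(5.0), calculations_per_second(10.0))  # 10.0 20.0
```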

Token Allocation

"Public Sale","Private Sale","Strategic Sale","Seed Sale","Team","Ecosystem" {"name": "Public Sale","value": 3},{"name": "Private Sale","value": 14.2},{"name": "Strategic Sale","value": 18.735},{"name": "Seed Sale","value": 4.065},{"name": "Team","value": 25},{"name": "Ecosystem","value": 35}

Token Release Schedule

"Jun 2019", "Jul 2019", "Aug 2019", "Sep 2019", "Oct 2019", "Nov 2019", "Dec 2019", "Jan 2020", "Feb 2020", "Mar 2020", "Apr 2020", "May 2020", "Jun 2020", "Jul 2020", "Aug 2020", "Sep 2020", "Oct 2020", "Nov 2020", "Dec 2020", "Jan 2021", "Feb 2021", "Mar 2021", "Apr 2021", "May 2021", "Jun 2021", "Jul 2021", "Aug 2021", "Sep 2021", "Oct 2021", "Nov 2021", "Dec 2021", "Jan 2022", "Feb 2022", "Mar 2022", "Apr 2022", "May 2022" {"name": "Public Sale", "icon": "roundRect"},{"name": "Public Sale", "icon": "roundRect"},{"name": "Strategic Sale", "icon": "roundRect"},{"name": "Seed Sale", "icon": "roundRect"},{"name": "Team", "icon": "roundRect"},{"name": "Ecosystem", "icon": "roundRect"} {"data": [3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00, 3.00],"name": "Public Sale", "symbol": "circle", "type": "line", "stack": "Save", "areaStyle": {}},{"data": [5.68, 5.68, 5.68, 9.94, 9.94, 9.94, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20, 14.20],"name": "Public Sale", "symbol": "circle", "type": "line", "stack": "Save", "areaStyle": {}},{"data": [7.494, 7.494, 7.494, 13.114, 13.114, 13.114, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735, 18.735],"name": "Strategic Sale", "symbol": "circle", "type": "line", "stack": "Save", "areaStyle": {}},{"data": [1.626, 1.626, 1.626, 2.846, 2.846, 2.846, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065, 4.065],"name": "Seed Sale", "symbol": "circle", "type": "line", "stack": "Save", "areaStyle": {}},{"data": [0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 1.042, 2.084, 3.126, 4.168, 5.210, 6.252, 7.294, 8.336, 9.378, 10.420, 11.462, 12.504, 13.546, 14.588, 15.630, 16.672, 17.714, 18.756, 19.798, 20.840, 21.882, 22.924, 23.966, 25.000],"name": "Team", "symbol": "circle", "type": "line", "stack": "Save", "areaStyle": {}},{"data": [0.00, 1.458, 2.916, 4.374, 5.832, 7.290, 8.748, 10.206, 11.664, 13.122, 14.580, 16.038, 17.496, 18.954, 20.412, 21.870, 23.328, 24.786, 26.244, 27.702, 29.160, 30.618, 32.076, 33.534, 35.000, 35.000, 35.000, 35.000, 35.000, 35.000, 35.000, 35.000, 35.000, 35.000, 35.000, 35.000],"name": "Ecosystem", "symbol": "circle", "type": "line", "stack": "Save", "areaStyle": {}}

Token Allocation & Release Note

  • Public Sale: 3% of total supply
    • No lockup
  • Private Sale: 14.2% of total supply 
    • 40% at TGE, 30% after 3 months, 30% after 6 months
  • Strategic Sale: 18.735% of total supply
    • 40% at TGE, 30% after 3 months, 30% after 6 months
  • Seed Sale: 4.065% of total supply
    • 40% at TGE, 30% after 3 months, 30% after 6 months
  • Team: 25% of total supply
    • 1-year lockup, then monthly vesting over 2 years
  • Ecosystem: 35% of total supply
    • 1-month lockup, then monthly vesting over 2 years
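
A short Python sketch that reproduces these schedules as cumulative percentages of total supply, month by month from the TGE, is given below. The monthly granularity, the linear vesting after lockup, and the step unlocks at months 3 and 6 are assumptions consistent with the notes above; the actual contract mechanics are not described here.

```python
# Cumulative % of total supply unlocked, month by month from the TGE,
# following the allocation and release notes above. Granularity and the
# linear-vesting interpretation are assumptions for illustration.
def unlocked(allocation, month, tge_pct=0.0, cliffs=(), lockup=0, vest_months=0):
    """Unlocked share of total supply for one allocation, `month` months after TGE."""
    pct = tge_pct
    for cliff_month, cliff_pct in cliffs:          # step unlocks (e.g. +30% after 3 months)
        if month >= cliff_month:
            pct += cliff_pct
    if vest_months and month > lockup:             # linear monthly vesting after lockup
        pct += min(month - lockup, vest_months) / vest_months * (100 - tge_pct)
    return allocation * min(pct, 100.0) / 100.0

def released_at(month):
    return {
        "Public Sale":    unlocked(3.000, month, tge_pct=100),
        "Private Sale":   unlocked(14.200, month, tge_pct=40, cliffs=[(3, 30), (6, 30)]),
        "Strategic Sale": unlocked(18.735, month, tge_pct=40, cliffs=[(3, 30), (6, 30)]),
        "Seed Sale":      unlocked(4.065, month, tge_pct=40, cliffs=[(3, 30), (6, 30)]),
        "Team":           unlocked(25.000, month, lockup=12, vest_months=24),
        "Ecosystem":      unlocked(35.000, month, lockup=1, vest_months=24),
    }

print(released_at(0))    # at TGE: sale tranches partially unlocked, team/ecosystem locked
print(released_at(36))   # three years on: all allocations fully released (sums to 100%)
```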

Token Utility & Use Cases

The RAVEN token will be the means to kick off all activity in the Raven Protocol ecosystem. An AI developer who wants to perform a training run on 1M images will need to send RAVEN tokens to a smart contract. Contributors to the network who provide compute nodes will be rewarded with RAVEN tokens upon successful completion of the training run; the tokens are released via smart contract once the data and calculations are verified. A toy sketch of this escrow flow follows.
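
The sketch below models that flow off-chain in Python: a requester locks RAVEN for a job, contributions are recorded, and the balance is paid out pro rata once the run is verified. The escrow class, the work-unit accounting, and the pro-rata payout rule are illustrative assumptions; the section does not specify the actual contract interface.

```python
# Toy, off-chain model of the escrow flow described above. A requester locks
# RAVEN for a training job; once the run and its calculations are verified,
# the locked tokens are paid out to the contributing nodes pro rata.
from dataclasses import dataclass, field

@dataclass
class TrainingEscrow:
    requester: str
    locked_raven: float
    contributions: dict = field(default_factory=dict)   # node_id -> verified work units
    paid: bool = False

    def record_contribution(self, node_id: str, work_units: float) -> None:
        self.contributions[node_id] = self.contributions.get(node_id, 0.0) + work_units

    def settle(self, run_verified: bool) -> dict:
        """Release the locked tokens to nodes pro rata once the run is verified."""
        if not run_verified or self.paid or not self.contributions:
            return {}
        total = sum(self.contributions.values())
        self.paid = True
        return {node: self.locked_raven * units / total
                for node, units in self.contributions.items()}

escrow = TrainingEscrow(requester="ai-dev", locked_raven=10_000)
escrow.record_contribution("node-a", 60)
escrow.record_contribution("node-b", 40)
print(escrow.settle(run_verified=True))   # {'node-a': 6000.0, 'node-b': 4000.0}
```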

Contributors running the compute nodes are ideally heavily involved in the AI community and in driving the technology forward, so they will in turn need the RAVEN tokens they have earned to do AI training of their own. Tokens held in the network gain value as they are used to train other models. A single node alone would not provide sufficient compute power to conduct the training; the system is sustained by the many nodes participating across the network. The flow of RAVEN tokens thus comes full circle and continues to be utilized inside the network.

RAVEN tokens will also be used on the marketplace/platform to utilize additional services on top of the distributed AI training. Acquiring datasets, annotating data, using someone else’s trained model, running an algorithm developed by the community, etc., can be accomplished by using RAVEN tokens.

Roadmap & Updates

  • Q4 2017: Inception
    • Cloud costs were mounting for the co-founders, who were doing AI training on different projects.
  • Q1 2018: POC
    • A basic JavaScript-based deep-learning framework was built.
    • A 7-layer CNN was trained on MNIST.
  • Q2 2018: Whitepaper V1.0
    • A comprehensive whitepaper describing the Raven ecosystem and its capabilities at launch.
  • Q3 2018: Structuring Framework
    • Structuring the framework and preparing to open it up for beta-phase development as an open-source framework.
  • Q4 2018: BUIDL Mode
    • Raised a seed round in September 2018 after the team proved to itself that a truly distributed and scalable solution to AI training was possible.
  • Q1 2019: Customer Development
    • Talking closely with potential customers to understand their requirements and validating that they would become beta customers.
  • Q2 2019: Community Building
    • Everyone in AI and blockchain needs to know about the project; the team started becoming recognized as leaders in the industry.
  • Q3 2019
    • Open-Source Development: Building a repository for open-sourced development with developers from various parts of the globe.
    • Function Implementation: Completing basic calculus and statistical functions to be able to support advanced calculations.
  • Q4 2019:
    • RNN Unit Implementation: Adding an RNN unit to support LSTMs, GRUs, and other advanced modelling such as video classification.
    • Convolution Layer Implementation: Adding a convolution function to support advanced image and text models.
  • Q1 2020: Blockchain Implementation
    • Testing the framework on a blockchain testnet with a select few companies.
    • Private Beta Launch
  • Q2 2020: Public Beta Launch
