Dria Node Tutorial

CryptonodeHindi
5 min read · Dec 29, 2024


Overview:

Dria is the only synthetic data infrastructure that lets you balance data quality, diversity, and complexity together in a single interface.

  • A framework for creating, managing, and orchestrating synthetic data pipelines.
  • A multi-agent network that can synthesize data from web and siloed sources.

Why use Dria?

Dria provides the scalable and versatile tools you need to accelerate your AI development with high-quality, diverse synthetic datasets.

No GPUs needed:

As a network, Dria allows you to directly offload your compute needs to the network, leveraging massive parallelization. This means faster processing and more efficient resource utilization without the need for personal GPU infrastructure.

Requirements:

Software

Depending on the AI models you choose, you may need to install additional software.

If you use API-based models, you don't have to install anything locally; you just need an API key.

Hardware:

To learn about hardware specifications such as required CPU and RAM, please refer to the node specifications.

A VPS is required for this setup.

In general, if you are using Ollama you will need enough memory to run large models locally, which depends on the size of the model you choose. If you are in a memory-constrained environment, you can opt to use OpenAI models instead.
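As a rough sanity check before picking an Ollama model, you can estimate RAM needs from parameter count. This sketch assumes a Q4-quantized model at roughly 0.5 bytes per parameter plus about 1 GiB of overhead; these are common rules of thumb, not official Dria or Ollama figures.

```shell
# Rough RAM estimate for a quantized local model.
# Assumption: Q4 quantization ~0.5 bytes/parameter + ~1 GiB runtime overhead.
estimate_ram_gb() {
  params_b=$1   # parameter count, in billions
  awk -v p="$params_b" 'BEGIN { printf "%.1f\n", p * 0.5 + 1 }'
}

estimate_ram_gb 1   # llama3.2:1b  -> 1.5
estimate_ram_gb 8   # an 8B model -> 5.0
```

If the estimate exceeds your VPS RAM, an API-based model (Gemini, OpenRouter, OpenAI) is the safer choice.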


Let’s start the setup:

  • Execute the automated script below, which I created:
wget -q https://raw.githubusercontent.com/CryptonodesHindi/Automated_script/refs/heads/main/driacnh.sh && chmod +x driacnh.sh && ./driacnh.sh

Once the script has completed, follow the next steps.

  • Create a dedicated screen.
screen -S dria
  • Go to the respective directory.
cd dkn-compute-node

Run Dria Node:

./dkn-compute-launcher
  • Once executed, it will prompt you to enter your DKN wallet secret key (your MetaMask private key without the 0x prefix).
  • Once you provide your private key, it will ask you to pick the model you want to run.
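The "without the 0x" detail above is easy to miss. A tiny helper like the one below shows what is expected; the key shown is a short dummy value for illustration only, never paste a real private key into scripts or shell history.

```shell
# Strip a leading "0x" prefix from a hex key, if present.
# "${1#0x}" is POSIX parameter expansion: remove "0x" from the front.
strip_0x() { printf '%s\n' "${1#0x}"; }

strip_0x "0x1a2b3c"   # -> 1a2b3c
strip_0x "1a2b3c"     # -> 1a2b3c (unchanged when no prefix)
```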

Pick a Model

You can choose among Gemini, OpenRouter, Ollama, and OpenAI.

Gemini(FREE):

  • Free via Google's API, which also offers a paid plan.
  • Provides up to 1,500 free requests daily and doesn't require VPS resources.
  • Powered by Google's API. Get your Google API key here.

OpenRouter(Paid):

  • A paid API; you can buy credits using cryptocurrency.
  • Offers excellent performance and is ideal for maximizing points. Get the API here.

Ollama(Free):

  • Downloads and runs a local model hosted on your server. The model itself is free, but it uses your VPS or system resources.
  • Requires a powerful VPS. If you don’t have one, you can focus on Gemini and OpenRouter instead.

OpenAI(Paid):

  • A paid API. The free tier may not earn points, but you can try it if you choose. Get your API key here.

To select multiple models, provide their numbers in the format below. If you don't want to use the paid OpenRouter model, simply omit its number.

5: OpenAI | o1-mini
10: Gemini | gemini-1.5-flash
25: OpenRouter | qwq-32b-preview
45: Ollama | llama3.2:1b

This is just an example: if you want to run one of the models listed above, simply provide its corresponding number as shown.

I suggest using llama3.1:latest and gemini-1.5-flash.

Here is the distribution of the top 20 models running in the network based on the nodes running these models:

  • Gemini-1.5-Flash: 12.22%
  • llama3.2:1b: 9.79%
  • Gemini-1.0-Pro: 7.88%
  • Gemini-1.5-Pro: 7.03%
  • Gemini-1.5-Pro-Exp-0827: 6.71%
  • GPT-4o-Mini: 5.83%
  • GPT-4o: 5.18%
  • GPT-4-Turbo: 4.70%
  • O1-Preview: 4.62%
  • O1-Mini: 4.52%
  • Llama3.2:3b: 3.11%
  • Qwen/Qwq-32b-Preview: 2.65%
  • Gemini-2.0-Flash-Exp: 2.11%
  • Qwen2.5-Coder:1.5b: 1.77%
  • Llama3.2:1b-Text-Q4_K_M: 1.59%
  • Llama3.1:Latest: 1.55%
  • Phi3.5:3.8b: 1.49%
  • Qwen2.5:7b-Instruct-Q5_0: 1.43%
  • Deepseek-Coder:6.7b: 1.21%
  • Qwen2.5-Coder:7b-Instruct: 1.17%
  • Next, it will ask for your API key. Provide it and proceed.
  • Skip the Jina and Serper API keys by pressing Enter.

Your node will now start downloading the model files and testing them. Each model must pass its test, which depends on your system specifications.
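If the model tests keep failing, low memory is a common culprit. This small check reads available memory before the tests run; it assumes a Linux VPS with /proc/meminfo, and the 4 GiB threshold is illustrative, not an official requirement.

```shell
# Warn if available memory is below a given threshold (in GiB).
# Assumption: Linux host exposing MemAvailable in /proc/meminfo.
check_mem() {
  need_gb=$1
  avail_kb=$(awk '/MemAvailable/ { print $2 }' /proc/meminfo)
  avail_gb=$(( avail_kb / 1024 / 1024 ))
  if [ "$avail_gb" -lt "$need_gb" ]; then
    echo "low memory: ${avail_gb} GiB available, ${need_gb} GiB recommended"
  else
    echo "ok: ${avail_gb} GiB available"
  fi
}

check_mem 4
```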

Ollama status:

  • You can check the status of your Ollama by using the command below.
  • Make sure it is running. If it shows ‘exited,’ it means your Ollama is not running.
sudo systemctl status ollama
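If the status shows "exited" or "failed", you will want to restart the service. A small sketch of that decision, assuming systemd as in the status check above; the `needs_restart` helper name is mine, not part of any tool:

```shell
# Map the output of `systemctl is-active <unit>` to a restart decision.
needs_restart() {
  case "$1" in
    active) echo no ;;
    *)      echo yes ;;   # "inactive", "failed", etc.
  esac
}

# On a real host (requires systemd):
#   [ "$(needs_restart "$(systemctl is-active ollama)")" = yes ] && sudo systemctl restart ollama

needs_restart active   # -> no
needs_restart failed   # -> yes
```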

Re-run Dria Node

If your model didn’t pass or encountered an error, you should re-run your node in a screen session. Press Ctrl+C to exit the node, and then restart it.

./dkn-compute-launcher
  • Now, detach from the screen by pressing Ctrl + A followed by D.
  • To reattach the screen, use the below command:
screen -r dria

Your node has deployed successfully.

Port Conflict Error:

  • If you encounter this error, run the commands below; they will change your port from 4001 to 4008.
sudo ufw allow 4008/tcp
sed -i 's|DKN_P2P_LISTEN_ADDR=/ip4/0.0.0.0/tcp/4001|DKN_P2P_LISTEN_ADDR=/ip4/0.0.0.0/tcp/4008|' /root/dkn-compute-node/.env
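To see exactly what that `sed` substitution does, here it is applied to a scratch copy of the listen-address line (the real command edits /root/dkn-compute-node/.env in place with `-i`):

```shell
# Demonstrate the port rewrite on a temporary file instead of the live .env.
tmp=$(mktemp)
echo 'DKN_P2P_LISTEN_ADDR=/ip4/0.0.0.0/tcp/4001' > "$tmp"

# Same substitution as the real command, without -i so the original is untouched.
result=$(sed 's|tcp/4001|tcp/4008|' "$tmp")
echo "$result"   # -> DKN_P2P_LISTEN_ADDR=/ip4/0.0.0.0/tcp/4008

rm -f "$tmp"
```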

Model Change:

  • If you want to change models, stop the node and then execute the command below. It will start with your chosen models; just add their names as shown in the example.
./dkn-compute-launcher -m=llama3.2:1b -m=gpt-4o-mini -m=gemini-1.5-flash -m=qwq-32b-preview

Check your points here: https://steps.leaderboard.dria.co/

Let's claim the Discord Keeper role:

Stop Nodes:

Stop Ollama Node

sudo systemctl stop ollama
sudo systemctl disable ollama

❤️Thank You for Reading!

I appreciate you taking the time to read this article. If you found it helpful, give it a clap and share it with others who might benefit from it.

For more insightful content on Crypto node, don’t forget to follow me on Medium and connect with me on my social media channels:
