Guide:Odysee Channel/Playlist Download (1k+ compatible)


Using .sh Scripts to Download Video Links from an Odysee/LBRY Channel or Playlist

This tutorial explains how to use two Bash scripts to collect the video links from an Odysee/LBRY channel or playlist, including how to work past the API's limit of 1000 results per search by filtering on `release_time`. The scripts work on macOS, Windows (with a Bash shell), and Linux.

Script 1: Fetch Initial 1000 Video Links

The first script fetches up to 1000 video links (20 pages of 50 results each) from a specified Odysee/LBRY channel and saves them to a file, keeping the raw JSON response for each page alongside for debugging.

#!/bin/bash

# Define the output file
OUTPUT_FILE="videos.txt"
# Clear the output file if it exists
> $OUTPUT_FILE

# Maximum number of retries
MAX_RETRIES=3

# Loop through pages 1 to 20
for PAGE in {1..20}
do
  echo "Fetching page $PAGE"

  # Initialize retry counter
  RETRIES=0

  while [ $RETRIES -lt $MAX_RETRIES ]; do
    # Execute the curl command and extract the video URLs
    RESPONSE=$(curl --location --request POST 'https://api.lbry.tv/api/v1/proxy' \
    --header 'Content-Type: application/json' \
    --data-raw '{
      "method": "claim_search",
      "params": {
        "channel": "@UnofficialCyraxArchive:b",
        "order_by": "release_time",
        "page": '"$PAGE"',
        "page_size": 50
      }
    }' --max-time 10) # Timeout after 10 seconds if no response

    # Check if curl succeeded
    if [ $? -eq 0 ]; then
      echo "$RESPONSE" > response_$PAGE.json
      URLS=$(echo "$RESPONSE" | jq -r '.result.items[].canonical_url')

      if [ -n "$URLS" ]; then
        echo "$URLS" >> $OUTPUT_FILE
        echo "Page $PAGE processed successfully"
        break
      else
        echo "No URLs found in the response for page $PAGE"
      fi
    else
      echo "Curl command failed for page $PAGE, retrying... ($((RETRIES+1))/$MAX_RETRIES)"
    fi
    
    # Increment retry counter
    RETRIES=$((RETRIES + 1))
  done

  # If maximum retries reached, print an error message
  if [ $RETRIES -eq $MAX_RETRIES ]; then
    echo "Failed to fetch page $PAGE after $MAX_RETRIES retries, skipping..."
  fi
done

echo "Data fetch complete. Video URLs saved in $OUTPUT_FILE"

Prerequisites

  • Bash Shell: The script runs in a Bash environment.
  • cURL: Command-line tool for transferring data with URLs.
  • jq: Command-line JSON processor.
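
If you are not sure whether these tools are already present, checking their versions is a quick way to find out (installation steps for each platform follow below):

   curl --version
   jq --version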

Steps to Use the Script

On macOS

  1. Open Terminal: You can find it in Applications > Utilities > Terminal.
  2. Install Homebrew: If you don't have Homebrew installed, you can install it by running:
   /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  3. Install jq:
   brew install jq
  4. Create the Script File:
   nano download_videos.sh
  5. Copy and Paste the Script: Copy the provided script into `nano` and save it by pressing `CTRL + X`, then `Y`, and `Enter`.
  6. Make the Script Executable:
   chmod +x download_videos.sh
  7. Run the Script:
   ./download_videos.sh

On Windows

  1. Install Windows Subsystem for Linux (WSL): Follow the instructions from the official Microsoft guide.
  2. Install Ubuntu from Microsoft Store: This provides a Bash shell.
  3. Open Ubuntu: Launch the Ubuntu app.
  4. Install cURL and jq:
   sudo apt update
   sudo apt install curl jq
  5. Create the Script File:
   nano download_videos.sh
  6. Copy and Paste the Script: Copy the provided script into `nano` and save it by pressing `CTRL + X`, then `Y`, and `Enter`.
  7. Make the Script Executable:
   chmod +x download_videos.sh
  8. Run the Script:
   ./download_videos.sh

On Linux

  1. Open Terminal.
  2. Install cURL and jq: Use your package manager to install cURL and jq. For example, on Debian-based systems:
   sudo apt update
   sudo apt install curl jq
  3. Create the Script File:
   nano download_videos.sh
  4. Copy and Paste the Script: Copy the provided script into `nano` and save it by pressing `CTRL + X`, then `Y`, and `Enter`.
  5. Make the Script Executable:
   chmod +x download_videos.sh
  6. Run the Script:
   ./download_videos.sh
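
Whichever platform you used, a quick sanity check after the script finishes helps confirm the fetch worked. This is a minimal sketch using standard shell tools and the default file name from the script above:

   wc -l videos.txt      # should report up to 1000 lines, one URL each (20 pages x 50)
   head -n 3 videos.txt  # spot-check that the first few entries look like video URLs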

Fetching the Release Time of the Last Video

After running the first script, you need to find the `release_time` of the last video it fetched (the 1000th, i.e. oldest, result). Requesting page 50 with a `page_size` of 20 returns that same final item, since 50 × 20 = 1000. Use the following `cURL` command, replacing `@Channel:A` with the channel you are archiving (for example `@UnofficialCyraxArchive:b`):

curl --location --request POST 'https://api.lbry.tv/api/v1/proxy' \
--header 'Content-Type: application/json' \
--data-raw '{
  "method": "claim_search",
  "params": {
    "channel": "@Channel:A",
    "order_by": "release_time",
    "page": 50,
    "page_size": 20
  }
}'

Look for the `release_time` value in the response. For example:

"release_time": "1675914909"

If you prefer not to run cURL yourself, an online tool such as CurlerRoo can be used to send the request and read the `release_time` from the response manually.
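
If you would rather stay in the terminal, the same request can be piped through `jq` to print the `release_time` of the last returned item directly. This is only a sketch, not part of the original scripts; the jq path assumes the field sits under each item's `value` object, as in the example above, so adjust it if your response nests it differently:

curl --location --request POST 'https://api.lbry.tv/api/v1/proxy' \
--header 'Content-Type: application/json' \
--data-raw '{
  "method": "claim_search",
  "params": {
    "channel": "@Channel:A",
    "order_by": "release_time",
    "page": 50,
    "page_size": 20
  }
}' | jq -r '.result.items[-1].value.release_time'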

Script 2: Fetch the Next 1000 Video Links

Use the `release_time` value you obtained to fetch the next set of video links. In the script below, replace the example timestamp in `"release_time": "<=1675914909"` with your own value, keeping the `<=` prefix so the search only returns videos released at or before that time. Because this script also writes to `videos.txt` and clears it on start, rename the first script's output beforehand (for example `mv videos.txt videos1.txt`). Save this script as a separate file, for example `download_more_videos.sh`, then make it executable and run it the same way as the first one.

#!/bin/bash

# Define the output file
OUTPUT_FILE="videos.txt"
# Clear the output file if it exists
> $OUTPUT_FILE

# Maximum number of retries
MAX_RETRIES=3

# Loop through pages 1 to 20
for PAGE in {1..20}
do
  echo "Fetching page $PAGE"

  # Initialize retry counter
  RETRIES=0

  while [ $RETRIES -lt $MAX_RETRIES ]; do
    # Execute the curl command and extract the video URLs
    RESPONSE=$(curl --location --request POST 'https://api.lbry.tv/api/v1/proxy' \
    --header 'Content-Type: application/json' \
    --data-raw '{
      "method": "claim_search",
      "params": {
        "channel": "@UnofficialCyraxArchive:b",
        "order_by": "release_time",
        "release_time": "<=1675914909",
        "page": '"$PAGE"',
        "page_size": 50
      }
    }' --max-time 10) # Timeout after 10 seconds if no response

    # Check if curl succeeded
    if [ $? -eq 0 ]; then
      echo "$RESPONSE" > response_$PAGE.json
      URLS=$(echo "$RESPONSE" | jq -r '.result.items[].canonical_url')

      if [ -n "$URLS" ]; then
        echo "$URLS" >> $OUTPUT_FILE
        echo "Page $PAGE processed successfully"
        break
      else
        echo "No URLs found in the response for page $PAGE"
      fi
    else
      echo "Curl command failed for page $PAGE, retrying... ($((RETRIES+1))/$MAX_RETRIES)"
    fi
    
    # Increment retry counter
    RETRIES=$((RETRIES + 1))
  done

  # If maximum retries reached, print an error message
  if [ $RETRIES -eq $MAX_RETRIES ]; then
    echo "Failed to fetch page $PAGE after $MAX_RETRIES retries, skipping..."
  fi
done

echo "Data fetch complete. Video URLs saved in $OUTPUT_FILE"

Troubleshooting

  • Permission Denied: If you encounter a permission denied error, ensure the script has execute permissions (`chmod +x download_more_videos.sh`).
  • Command Not Found: If `curl` or `jq` commands are not found, ensure they are installed correctly.
  • Network Issues: Ensure you have a stable internet connection, as the script relies on network requests.

Using yt-dlp to Download Videos Using a List of URLs

This part explains how to use `yt-dlp` to download videos from a list of URLs. The command provided will force the use of IPv4, read URLs from a list, and keep track of downloaded videos to avoid duplicates.

Prerequisites

  • yt-dlp: A command-line program to download videos from Odysee, YouTube, and many other video platforms.
  • A text file with a list of URLs: The file should contain one URL per line.
  • A text file to keep track of downloaded videos: This file will be used to prevent downloading the same video more than once.

Installing yt-dlp

On macOS

  1. Open Terminal: You can find it in Applications > Utilities > Terminal.
  2. Install yt-dlp:
   brew install yt-dlp

On Windows

  1. Open Command Prompt or PowerShell.
  2. Install yt-dlp:
   pip install yt-dlp
  3. If `pip` is not installed, install it by downloading `get-pip.py` from https://bootstrap.pypa.io/get-pip.py and running:
   python get-pip.py

On Linux

  1. Open Terminal.
  2. Install yt-dlp:
   sudo apt update
   sudo apt install yt-dlp
  3. Alternatively, you can use `pip`:
   pip install yt-dlp

Preparing the List of URLs

1. Create a text file named `list.txt`:

   nano list.txt

2. Add URLs to the file: Each line should contain one video URL. Save and close the file.
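
If the URLs came from the fetch scripts above, `list.txt` can also be built from their output instead of being typed by hand. A minimal sketch, assuming the first run's output was renamed to `videos1.txt` (as suggested earlier) and the second run produced `videos.txt`:

   cat videos1.txt videos.txt | sort -u > list.txt   # merge both runs and drop duplicate URLs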

Preparing the Download Archive File

1. Create a text file named `myarchive.txt`:

   touch myarchive.txt

Using the Command

To download videos from the list of URLs, use the following command:

yt-dlp --force-ipv4 -a list.txt --download-archive myarchive.txt

Command Explanation

  • `yt-dlp`: The command to run yt-dlp.
  • `--force-ipv4`: Forces the use of IPv4.
  • `-a list.txt`: Specifies the input file (`list.txt`) that contains the list of video URLs.
  • `--download-archive myarchive.txt`: Specifies the archive file (`myarchive.txt`) to keep track of downloaded videos.
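
The command can also be extended with other standard yt-dlp options. As a hedged example (the output template below is just one possibility, not something this guide requires), `-o` sets a filename pattern and `--retries` raises the retry count for flaky connections:

   yt-dlp --force-ipv4 -a list.txt --download-archive myarchive.txt -o "%(title)s [%(id)s].%(ext)s" --retries 10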

Running the Command

On macOS

  1. Open Terminal.
  2. Navigate to the directory containing `list.txt` and `myarchive.txt`:
   cd path/to/your/files
  3. Run the command:
   yt-dlp --force-ipv4 -a list.txt --download-archive myarchive.txt

On Windows

  1. Open Command Prompt or PowerShell.
  2. Navigate to the directory containing `list.txt` and `myarchive.txt`:
   cd path\to\your\files
  3. Run the command:
   yt-dlp --force-ipv4 -a list.txt --download-archive myarchive.txt

On Linux

  1. Open Terminal.
  2. Navigate to the directory containing `list.txt` and `myarchive.txt`:
   cd path/to/your/files
  3. Run the command:
   yt-dlp --force-ipv4 -a list.txt --download-archive myarchive.txt

Troubleshooting

  • Permission Denied: If you encounter a permission denied error, ensure you have the necessary permissions to execute the command.
  • Command Not Found: Ensure `yt-dlp` is installed correctly and is in your system's PATH.
  • Network Issues: Ensure you have a stable internet connection, as the command relies on network requests.