Thursday, July 3, 2025

MySQL Analysis: With an AI-Powered CLI Tool


As MySQL DBAs we often live in a Linux terminal window, and we enjoy free options when they're available. This post shows an approach that lets us stay in the terminal and still use an AI-powered tool. You can adapt it to other AI providers directly, but I set this example up to use aimlapi.com because it brings multiple AI models to your terminal for free with limited use, or at very low cost for heavier testing.

Note: I'm not a paid spokesperson for AIMLAPI or anything - this is just an easy example to highlight the idea.

The Problem

You're looking at a legacy database with hundreds of tables, each with complex relationships and questionable design decisions made years ago. The usual process involves:

  • Manual schema inspection
  • Cross-referencing documentation (if it exists)
  • Running multiple EXPLAIN queries
  • Consulting best practice guides
  • Seeking second opinions from colleagues

This takes time and you often miss things.

A CLI-Based Approach

We can take advantage of AI directly from the CLI for all sorts of tasks; helping with MySQL analysis is just one example of how this fits into our daily database work. By combining MySQL's native capabilities with AI models, all accessible through a simple command-line interface, we can get insights without leaving the terminal. AIMLAPI provides free access to over 100 AI models with limited use, making this approach accessible, and for heavier testing the costs remain very reasonable.

The Tool: AIMLAPI CLI

So here's a bash script that provides access to 100+ AI models through a single interface:

#!/bin/bash
# AIMLAPI CLI tool with access to 100+ AI models
# File: ~/.local/bin/aiml

# Configuration
DEFAULT_MODEL=${AIMLAPI_DEFAULT_MODEL:-"gpt-4o"}
MAX_TOKENS=${AIMLAPI_MAX_TOKENS:-2000}
TEMPERATURE=${AIMLAPI_TEMPERATURE:-0.7}
BASE_URL="https://api.aimlapi.com"
ENDPOINT="v1/chat/completions"

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
PURPLE='\033[0;35m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Function to print colored output
print_info() { echo -e "${BLUE}[INFO]${NC} $1"; }
print_success() { echo -e "${GREEN}[SUCCESS]${NC} $1"; }
print_warning() { echo -e "${YELLOW}[WARNING]${NC} $1"; }
print_error() { echo -e "${RED}[ERROR]${NC} $1"; }
print_model() { echo -e "${PURPLE}[MODEL]${NC} $1"; }

# Popular model shortcuts
declare -A MODEL_SHORTCUTS=(
    # OpenAI Models
    ["gpt4"]="gpt-4o"
    ["gpt4o"]="gpt-4o"
    ["gpt4mini"]="gpt-4o-mini"
    ["o1"]="o1-preview"
    ["o3"]="openai/o3-2025-04-16"
    
    # Claude Models  
    ["claude"]="claude-3-5-sonnet-20241022"
    ["claude4"]="anthropic/claude-sonnet-4"
    ["opus"]="claude-3-opus-20240229"
    ["haiku"]="claude-3-5-haiku-20241022"
    ["sonnet"]="claude-3-5-sonnet-20241022"
    
    # DeepSeek Models
    ["deepseek"]="deepseek-chat"
    ["deepseek-r1"]="deepseek/deepseek-r1"
    ["reasoner"]="deepseek-reasoner"
    
    # Google Models
    ["gemini"]="gemini-2.0-flash"
    ["gemini2"]="gemini-2.0-flash"
    ["gemini15"]="gemini-1.5-pro"
    
    # Meta Llama Models
    ["llama"]="meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo"
    ["llama405b"]="meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo"
    
    # Qwen Models
    ["qwen"]="qwen-max"
    ["qwq"]="Qwen/QwQ-32B"
    
    # Grok Models
    ["grok"]="x-ai/grok-beta"
    ["grok3"]="x-ai/grok-3-beta"
    
    # Specialized Models
    ["coder"]="Qwen/Qwen2.5-Coder-32B-Instruct"
)

# Function to resolve model shortcuts
resolve_model() {
    local model="$1"
    if [[ -n "${MODEL_SHORTCUTS[$model]}" ]]; then
        echo "${MODEL_SHORTCUTS[$model]}"
    else
        echo "$model"
    fi
}

# Function to create JSON payload using jq for proper escaping
create_json_payload() {
    local model="$1"
    local prompt="$2"
    local system_prompt="$3"
    
    local temp_file=$(mktemp)
    echo "$prompt" > "$temp_file"
    
    if [ -n "$system_prompt" ]; then
        jq -n --arg model "$model" \
              --rawfile prompt "$temp_file" \
              --arg system "$system_prompt" \
              --argjson max_tokens "$MAX_TOKENS" \
              --argjson temperature "$TEMPERATURE" \
              '{
                model: $model,
                messages: [{role: "system", content: $system}, {role: "user", content: $prompt}],
                max_tokens: $max_tokens,
                temperature: $temperature
              }'
    else
        jq -n --arg model "$model" \
              --rawfile prompt "$temp_file" \
              --argjson max_tokens "$MAX_TOKENS" \
              --argjson temperature "$TEMPERATURE" \
              '{
                model: $model,
                messages: [{role: "user", content: $prompt}],
                max_tokens: $max_tokens,
                temperature: $temperature
              }'
    fi
    
    rm -f "$temp_file"
}

# Function to call AIMLAPI
call_aimlapi() {
    local prompt="$1"
    local model="$2"
    local system_prompt="$3"
    
    if [ -z "$AIMLAPI_API_KEY" ]; then
        print_error "AIMLAPI_API_KEY not set"
        return 1
    fi
    
    model=$(resolve_model "$model")
    
    local json_file=$(mktemp)
    create_json_payload "$model" "$prompt" "$system_prompt" > "$json_file"
    
    local response_file=$(mktemp)
    local http_code=$(curl -s -w "%{http_code}" -X POST "${BASE_URL}/${ENDPOINT}" \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer $AIMLAPI_API_KEY" \
        --data-binary @"$json_file" \
        -o "$response_file")
    
    if [ "$http_code" -ne 200 ] && [ "$http_code" -ne 201 ]; then
        print_error "HTTP Error $http_code"
        cat "$response_file" >&2
        rm -f "$json_file" "$response_file"
        return 1
    fi
    
    local content=$(jq -r '.choices[0].message.content // empty' "$response_file" 2>/dev/null)
    
    if [ -z "$content" ]; then
        content=$(jq -r '.choices[0].text // .message.content // .content // empty' "$response_file" 2>/dev/null)
    fi
    
    if [ -z "$content" ]; then
        local error_msg=$(jq -r '.error.message // .error // empty' "$response_file" 2>/dev/null)
        if [ -n "$error_msg" ]; then
            echo "API Error: $error_msg"
        else
            echo "Error: Unable to parse response from API"
        fi
    else
        echo "$content"
    fi
    
    rm -f "$json_file" "$response_file"
}

# Main function with argument parsing
main() {
    local model="$DEFAULT_MODEL"
    local system_prompt=""
    local prompt=""
    local piped_input=""
    
    if [ -p /dev/stdin ]; then
        piped_input=$(cat)
    fi
    
    # Parse arguments
    while [[ $# -gt 0 ]]; do
        case $1 in
            -m|--model)
                model="$2"
                shift 2
                ;;
            -s|--system)
                system_prompt="$2"
                shift 2
                ;;
            *)
                prompt="$*"
                break
                ;;
        esac
    done
    
    # Handle input
    if [ -n "$piped_input" ] && [ -n "$prompt" ]; then
        prompt="$prompt

Here is the data to analyze:
$piped_input"
    elif [ -n "$piped_input" ]; then
        prompt="Please analyze this data:

$piped_input"
    elif [ -z "$prompt" ]; then
        echo "Usage: aiml [options] \"prompt\""
        echo "       command | aiml [options]"
        exit 1
    fi
    
    local resolved_model=$(resolve_model "$model")
    print_info "Querying $resolved_model..."
    
    local response=$(call_aimlapi "$prompt" "$model" "$system_prompt")
    
    echo ""
    print_model "Response from $resolved_model:"
    echo "----------------------------------------"
    echo "$response" 
    echo "----------------------------------------"
}

# Check dependencies
check_dependencies() {
    command -v curl >/dev/null 2>&1 || { print_error "curl required but not installed."; exit 1; }
    command -v jq >/dev/null 2>&1 || { print_error "jq required but not installed."; exit 1; }
}

check_dependencies
main "$@"

This script provides access to various AI models through simple shortcuts like claude4, gpt4, grok3, and so on. AIMLAPI offers free access with limited use to all these models, with reasonable costs for additional testing, which makes it a good fit for DBAs who want to experiment without breaking the budget.

Script Features

The script includes comprehensive help. Here's what aiml --help shows:

AIMLAPI CLI Tool - Access to 100+ AI Models
==============================================
Usage: aiml [OPTIONS] "prompt"
       command | aiml [OPTIONS]
Core Options:
  -m, --model MODEL         Model to use (default: gpt-4o)
  -t, --tokens NUMBER       Max tokens (default: 2000)
  -T, --temperature FLOAT   Temperature 0.0-2.0 (default: 0.7)
  -s, --system PROMPT       System prompt for model behavior
Input/Output Options:
  -f, --file FILE           Read prompt from file
  -o, --output FILE         Save response to file
  -r, --raw                 Raw output (no formatting/colors)
Information Options:
  -l, --list               List popular model shortcuts
  --get-models             Fetch all available models from API
  -c, --config             Show current configuration
  -v, --verbose            Enable verbose output
  -d, --debug              Show debug information
  -h, --help               Show this help
Basic Examples:
  aiml "explain quantum computing"
  aiml -m claude "review this code"
  aiml -m deepseek-r1 "solve this math problem step by step"
  aiml -m grok3 "what are the latest AI developments?"
  aiml -m coder "optimize this Python function"
Pipe Examples:
  ps aux | aiml "analyze these processes"
  netstat -tuln | aiml "explain these network connections"
  cat error.log | aiml -m claude "diagnose these errors"
  git diff | aiml -m coder "review these code changes"
  df -h | aiml "analyze disk usage and suggest cleanup"
File Operations:
  aiml -f prompt.txt -o response.txt
  aiml -f large_dataset.csv -m llama405b "analyze this data"
  cat script.py | aiml -m coder -o review.md "code review"
Model Categories & Shortcuts:
  OpenAI:     gpt4, gpt4mini, o1, o3
  Claude:     claude, opus, haiku, sonnet, claude4
  DeepSeek:   deepseek, deepseek-r1, reasoner
  Google:     gemini, gemini2, gemma
  Meta:       llama, llama3, llama4, llama405b
  Qwen:       qwen, qwen2, qwq
  Grok:       grok, grok3, grok3mini
  Coding:     coder, codestral
Advanced Usage:
  aiml -m claude -s "You are a security expert" "audit this code"
  aiml -m deepseek-r1 -t 3000 "complex reasoning task"
  aiml -v -m grok3 "verbose query with detailed logging"
  aiml -d "debug mode to troubleshoot API issues"
Model Discovery:
  aiml -l                   # Show popular shortcuts
  aiml --get-models         # Fetch all available models from API
  aiml --config             # Show current configuration
Environment Variables:
  AIMLAPI_API_KEY          - Your AIMLAPI key (required)
  AIMLAPI_DEFAULT_MODEL    - Default model (optional)
  AIMLAPI_MAX_TOKENS       - Default max tokens (optional)
  AIMLAPI_TEMPERATURE      - Default temperature (optional)
Pro Tips:
  • Use coder for programming tasks and code reviews
  • Use deepseek-r1 for complex reasoning and math problems
  • Use claude4 for detailed analysis and long-form content
  • Use grok3 for current events and real-time information
  • Use gpt4mini for quick questions to save on API costs
  • Pipe command output directly: command | aiml "analyze this"
  • Use -v for verbose output to see what model is being used
  • Use --get-models to see all 100+ available models

Access to 100+ AI models through one simple interface!

Example: The City Table

Here's how this works with an actual MySQL table analysis. I'll analyze a City table from the classic World database (from https://dev.mysql.com/doc/index-other.html Example Databases) using three different AI models.

The Command

mysql --login-path=klarson world -e "show create table City\G" | \
aiml --model claude4 "Using a MySQL expert point of view analyze this table"

This command:

  1. Extracts the table structure from MySQL
  2. Pipes it to our AI tool
  3. Gets analysis from Claude Sonnet 4

Results

Claude Sonnet 4 Analysis (The Structured Analyst)

Claude 4 provided a well-organized analysis:

Strengths:

  • Proper AUTO_INCREMENT primary key for InnoDB efficiency
  • Foreign key constraints maintaining referential integrity
  • Appropriate indexing strategy for common queries

Issues Found:

  • Storage Inefficiency: Using CHAR(35) for variable-length city names wastes space
  • Character Set Limitation: latin1 charset inadequate for international city names
  • Suboptimal Indexing: name_key index only covers first 5 characters

Suggested Improvements:

-- Claude's suggested optimized structure
CREATE TABLE `City` (
  `ID` int NOT NULL AUTO_INCREMENT,
  `Name` VARCHAR(35) NOT NULL,
  `CountryCode` CHAR(3) NOT NULL,
  `District` VARCHAR(20) NOT NULL,
  `Population` int UNSIGNED NOT NULL DEFAULT '0',
  PRIMARY KEY (`ID`),
  KEY `CountryCode` (`CountryCode`),
  KEY `name_idx` (`Name`),
  KEY `country_name_idx` (`CountryCode`, `Name`),
  KEY `population_idx` (`Population`),
  CONSTRAINT `city_ibfk_1` FOREIGN KEY (`CountryCode`) 
    REFERENCES `Country` (`Code`) ON DELETE RESTRICT ON UPDATE CASCADE
) ENGINE=InnoDB AUTO_INCREMENT=4080 
  DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

Grok 3 Beta Analysis (The Comprehensive Reviewer)

mysql --login-path=klarson world -e "show create table City\G" | \
aiml --model grok3 "Using a MySQL expert point of view analyze this table"

Grok 3 provided an exhaustive, detailed analysis covering:

Technical Deep Dive:

  • Performance Impact Analysis: Evaluated the partial index limitation in detail
  • Storage Engine Benefits: Confirmed InnoDB choice for transactional integrity
  • Data Type Optimization: Detailed space-saving recommendations with examples

Advanced Considerations:

  • Full-text indexing recommendations for city name searches
  • Character set migration procedures with specific commands
  • Partitioning strategies for large datasets

Implementation Guidelines:

-- Grok's character set migration suggestion
ALTER TABLE City CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

-- Full-text index recommendation
ALTER TABLE City ADD FULLTEXT INDEX name_fulltext (Name);

GPT-4o Analysis (The Practical Advisor)

mysql --login-path=klarson world -e "show create table City\G" | \
aiml --model gpt4 "Using a MySQL expert point of view analyze this table"

GPT-4o focused on practical, immediately actionable improvements:

Pragmatic Assessment:

  • Validated the AUTO_INCREMENT primary key design
  • Confirmed foreign key constraint benefits for data integrity
  • Identified character set limitations for global applications

Ready-to-Implement Suggestions:

  • Specific ALTER TABLE commands for immediate optimization
  • Query pattern analysis recommendations
  • Index effectiveness evaluation criteria
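
To make this concrete, the kind of immediately actionable change GPT-4o pointed at would look roughly like the following (my own illustrative sketch based on the issues listed above, not the model's verbatim output):

-- Illustrative only: convert the fixed-width columns to variable length
ALTER TABLE City
  MODIFY `Name` VARCHAR(35) NOT NULL,
  MODIFY `District` VARCHAR(20) NOT NULL;

-- Illustrative only: move the table off latin1 for international city names
ALTER TABLE City CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;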

The Power of Multi-Model Analysis

What makes this approach valuable is getting three distinct perspectives:

  1. Claude 4: Provides detailed, structured analysis with concrete code solutions
  2. Grok 3: Offers comprehensive coverage with advanced optimization strategies
  3. GPT-4o: Delivers practical, immediately actionable recommendations

Each model brings unique strengths:

  • Different focal points: Storage optimization vs. performance vs. maintainability
  • Varying depth levels: From quick wins to architectural improvements
  • Diverse analysis styles: Structured vs. comprehensive vs. practical

Implementing the Workflow

Setup Instructions

1. Install Dependencies:

# Install required tools
sudo apt install curl jq mysql-client

# Create the script directory and save the script above as ~/.local/bin/aiml
mkdir -p ~/.local/bin

# Make the script executable (make sure ~/.local/bin is on your PATH)
chmod +x ~/.local/bin/aiml

2. Configure API Access:

# Get your free AIMLAPI key from https://aimlapi.com (free tier with limited use)
export AIMLAPI_API_KEY="your-free-api-key-here"
echo 'export AIMLAPI_API_KEY="your-free-api-key-here"' >> ~/.bashrc

3. Test the Setup:

# Verify configuration
aiml --config

# Test basic functionality
echo "SELECT VERSION();" | aiml "explain this SQL"

Practical Usage Patterns

Quick Table Analysis

# Analyze a specific table
mysql -e "SHOW CREATE TABLE users\G" mydb | \
aiml -m claude4 "Analyze this MySQL table structure"

Compare Different Model Perspectives

# Get multiple viewpoints on the same table
TABLE_DDL=$(mysql -e "SHOW CREATE TABLE orders\G" ecommerce)

echo "$TABLE_DDL" | aiml -m claude4 "MySQL table analysis"
echo "$TABLE_DDL" | aiml -m grok3 "Performance optimization review" 
echo "$TABLE_DDL" | aiml -m gpt4 "Practical improvement suggestions"

Analyze Multiple Tables

# Quick analysis of all tables in a database
mysql -e "SHOW TABLES;" mydb | \
while read table; do
  echo "=== Analyzing $table ==="
  mysql -e "SHOW CREATE TABLE $table\G" mydb | \
  aiml -m gpt4mini "Quick assessment of this table"
done

Index Analysis

# Review index usage and optimization
mysql -e "SHOW INDEX FROM tablename;" database | \
aiml -m deepseek "Suggest index optimizations for this MySQL table"

Query Performance Analysis

# Analyze slow queries
mysql -e "SHOW PROCESSLIST;" | \
aiml -m grok3 "Identify potential performance issues in these MySQL processes"

Why AIMLAPI Makes This Possible for DBAs

Free Access with Reasonable Costs: AIMLAPI provides free access with limited use to over 100 AI models, with very reasonable pricing for additional testing. This makes it perfect for DBAs who want to experiment without committing to expensive subscriptions.

Model Diversity: Access to models from different providers (OpenAI, Anthropic, Google, Meta, etc.) means you get varied perspectives and expertise areas.

No Vendor Lock-in: You can experiment with different models to find what works best for your specific needs without long-term commitments.

Terminal-Native: Stays in your comfortable Linux environment where you're already doing your MySQL work.

Model Selection Guide

Different models excel at different aspects of MySQL analysis:

# For detailed structural analysis
aiml -m claude4 "Comprehensive table structure review"

# For performance-focused analysis  
aiml -m grok3 "Performance optimization recommendations"

# For quick, practical suggestions
aiml -m gpt4 "Immediate actionable improvements"

# For complex reasoning about trade-offs
aiml -m deepseek-r1 "Complex optimization trade-offs analysis"

# For cost-effective quick checks
aiml -m gpt4mini "Brief table assessment"

Beyond MySQL: Other CLI Examples

Since we can pipe any command output to the AI tool, here are some other useful examples:

System Administration

# Analyze system processes
ps aux | aiml "what processes are using most resources?"

# Check disk usage
df -h | aiml "analyze disk usage and suggest cleanup"

# Network connections
netstat -tuln | aiml "explain these network connections"

# System logs
tail -50 /var/log/syslog | aiml "any concerning errors in these logs?"

File and Directory Analysis

# Large files
find /var -size +100M | aiml "organize these large files by type"

# Permission issues
ls -la /etc/mysql/ | aiml "check these file permissions for security"

# Configuration review
cat /etc/mysql/my.cnf | aiml "review this MySQL configuration"

Log Analysis

# Apache logs
tail -100 /var/log/apache2/error.log | aiml "summarize these web server errors"

# Auth logs
grep "Failed password" /var/log/auth.log | aiml "analyze these failed login attempts"

The point is you can pipe almost anything to get quick analysis without leaving your terminal.

Custom System Prompts

Tailor the analysis to your specific context:

# E-commerce focus
aiml -m claude4 -s "You are analyzing tables for a high-traffic e-commerce site" \
"Review this table for scalability"

# Security focus
aiml -m grok3 -s "You are a security-focused database analyst" \
"Security assessment of this table structure"

# Legacy system focus
aiml -m gpt4 -s "You are helping migrate a legacy system to modern MySQL" \
"Modernization recommendations for this table"

Automated Reporting

# Generate a comprehensive database analysis report
DB_NAME="production_db"
REPORT_FILE="analysis_$(date +%Y%m%d).md"

echo "# Database Analysis Report for $DB_NAME" > "$REPORT_FILE"
echo "Generated on $(date)" >> "$REPORT_FILE"

for table in $(mysql -Ns -e "SHOW TABLES;" "$DB_NAME"); do
  echo "" >> "$REPORT_FILE"
  echo "## Table: $table" >> "$REPORT_FILE"
  
  mysql -e "SHOW CREATE TABLE $table\G" "$DB_NAME" | \
  aiml -m claude4 "Provide concise analysis of this MySQL table" >> "$REPORT_FILE"
done

Performance Optimization Workflow

# Comprehensive performance analysis
mysql -e "SHOW CREATE TABLE heavy_table\G" db | \
aiml -m grok3 "Performance bottleneck analysis"

# Follow up with index suggestions
mysql -e "SHOW INDEX FROM heavy_table;" db | \
aiml -m deepseek "Index optimization strategy"

# Get an implementation plan (pipe or paste the findings from the two analyses above)
aiml -m gpt4 "Create step-by-step implementation plan for these optimizations"

Real Benefits of This Approach

  • Speed: Get expert-level analysis in seconds instead of hours
  • Multiple Perspectives: Different models catch different issues
  • Learning Tool: Each analysis teaches you something new about MySQL optimization
  • Cost-Effective: Thanks to AIMLAPI's free tier and reasonable pricing, this powerful analysis is accessible
  • Consistency: Repeatable analysis across different tables and databases
  • Documentation: Easy to generate reports and share findings with teams

Tips for Best Results

  1. Start with Structure: Always begin with SHOW CREATE TABLE for comprehensive analysis
  2. Use Specific Prompts: The more specific your request, the better the analysis
  3. Compare Models: Different models excel at different aspects - use multiple perspectives
  4. Validate Suggestions: Always test AI recommendations in development environments first
  5. Iterate: Use follow-up questions to dive deeper into specific recommendations
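
For example, a follow-up iteration on the City analysis above might look like this (a hypothetical prompt, not from the original session):

# Hypothetical follow-up that drills into one specific recommendation
aiml -m claude4 "You suggested changing Name from CHAR(35) to VARCHAR(35). What locking and replication impact should I expect when running that ALTER on a busy production table?"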

Getting Started Today

The beauty of this approach is its simplicity and cost-effectiveness. With just a few commands, you can:

  1. Get your free AIMLAPI key from https://aimlapi.com (includes free tier)
  2. Install the script (5 minutes)
  3. Start analyzing your MySQL tables immediately
  4. Experiment with different models to see which ones work best for your needs
  5. Use the free tier for regular analysis, pay only for heavy testing

Windows Users (Quick Option)

I'm not a Windows person, but if you need to run this on Windows, the simplest approach is:

  1. Install WSL2 (Windows Subsystem for Linux)
  2. Install Ubuntu from Microsoft Store
  3. Follow the Linux setup above inside WSL2

This gives you a proper Linux environment where the script will work exactly as designed.

This isn't about replacing DBA expertise - it's about augmenting it while staying in your terminal environment. The AI provides rapid analysis and catches things you might miss, while you provide the context and make the final decisions.

Whether you're working with a single table or a complex database with hundreds of tables, this workflow scales to meet your needs. And since AIMLAPI provides free access with reasonable costs for additional use, you can experiment and find the perfect combination for your specific use cases without budget concerns.


The combination of MySQL's powerful introspection capabilities with AI analysis creates a workflow that's both practical and cost-effective for DBAs. Give it a try on your next database optimization project - you might be surprised at what insights emerge, all while staying in your comfortable terminal environment.

Tuesday, April 15, 2025

HOMELAB

Just for fun...

Numerous options exist for testing MySQL and database instances in general.

This is just an example of how you can use Proxmox for a simple, fast setup with repeatable testing and access.

It's also a nice, cheap way to run a home lab for MySQL and anything else you want.

For homelab virtualization I picked Proxmox, with OpenMediaVault for extended NFS storage (I'm not 100% sold on OpenMediaVault; plain Debian could do it).

For hardware, I picked a simple and compact setup.

Virtualization like this also gives you an environment that can run when needed, be shut down, or be left alone until you come back to it.

Once it's set up, it also lets you clone each instance whenever you want additional instances for testing.

This is going to be a very simple, direct setup across all of these instances.

You could create block devices for each data directory, or use NFS if you prefer, but for this demo everything will be a local, direct install.

Goal to test and demo:

  • Install
  • Monitoring
  • Vault Password rotation

Debian BASE 

Set up a Debian 12 instance with 4 GB of RAM and 4 CPUs.
Then I converted it into a template so every other instance can be linked-cloned from it.
This makes it very fast to set up additional instances, and they all start from the same base.
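
In Proxmox that boils down to a couple of qm commands, roughly like this (the VM ID and clone name below are placeholders for whatever you used):

# Convert the base Debian 12 VM (here assumed to be ID 9000) into a template
qm template 9000

# Create a linked clone of the template for a new database instance
qm clone 9000 101 --name mariadb1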


┌──(root㉿debian12-server)-[~]
└─# uname -a
Linux debian12-server 6.1.0-32-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.129-1 (2025-03-06) x86_64 GNU/Linux

apt install unzip
cd /usr/local/src/
wget https://releases.hashicorp.com/vault/1.4.2/vault_1.4.2_linux_amd64.zip
unzip vault_1.4.2_linux_amd64.zip
mv vault /usr/bin/
setcap cap_ipc_lock=+ep /usr/bin/vault
# vault -v
Vault v1.4.2

MariaDB 11 Rolling 

vi /etc/network/interfaces
auto ens18
iface ens18 inet static
    address 192.168.3.100
    netmask 255.255.255.0
    gateway 192.168.3.1
    dns-nameservers 8.8.8.8 8.8.4.4

# hostname
mariadb1.sqlhjalp.com

sudo apt-get install apt-transport-https curl
sudo mkdir -p /etc/apt/keyrings
sudo curl -o /etc/apt/keyrings/mariadb-keyring.pgp 'https://mariadb.org/mariadb_release_signing_key.pgp'
vi /etc/apt/sources.list.d/mariadb.sources

┌──(root㉿mariadb1)-[~]
└─# cat /etc/apt/sources.list.d/mariadb.sources
# MariaDB 11 Rolling repository list - created 2025-04-01 15:13 UTC
# https://mariadb.org/download/
X-Repolib-Name: MariaDB
Types: deb
# deb.mariadb.org is a dynamic mirror if your preferred mirror goes offline. See https://mariadb.org/mirrorbits/ for details.
# URIs: https://deb.mariadb.org/11/debian
URIs: https://mirror.its.dal.ca/mariadb/repo/11.rolling/debian
Suites: bookworm
Components: main
Signed-By: /etc/apt/keyrings/mariadb-keyring.pgp

┌──(root㉿mariadb1)-[~]
└─# apt-get update

┌──(root㉿mariadb1)-[~]
└─# apt-get install mariadb-server -y

┌──(root㉿mariadb1)-[~]
└─# mariadb
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 32
Server version: 11.7.2-MariaDB-deb12 mariadb.org binary distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>
MariaDB [(none)]> exit
Bye

┌──(root㉿mariadb1)-[~]
└─# systemctl stop mariadb.service

Just to be clear: yes, with a Proxmox linked clone you can restart the VM and it keeps all of your values and setup.

┌──(root㉿mariadb1)-[~]
└─# uptime
 10:27:29 up 1 min,  2 users,  load average: 0.15, 0.11, 0.04

┌──(root㉿mariadb1)-[~]
└─# mariadb
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 33
Server version: 11.7.2-MariaDB-deb12 mariadb.org binary distribution

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> status
--------------
mariadb from 11.7.2-MariaDB, client 15.2 for debian-linux-gnu (x86_64) using  EditLine wrapper

Connection id:		33
Current database:
Current user:		root@localhost
SSL:			Cipher in use is TLS_AES_256_GCM_SHA384, cert is OK
Current pager:		stdout
Using outfile:		''
Using delimiter:	;
Server:			MariaDB
Server version:		11.7.2-MariaDB-deb12 mariadb.org binary distribution
Protocol version:	10
Connection:		Localhost via UNIX socket
Server characterset:	utf8mb4
Db     characterset:	utf8mb4
Client characterset:	utf8mb3
Conn.  characterset:	utf8mb3
UNIX socket:		/run/mysqld/mysqld.sock
Uptime:			1 min 45 sec

Threads: 1  Questions: 61  Slow queries: 0  Opens: 33  Open tables: 26  Queries per second avg: 0.580
--------------

MySQL Innovation 

vi /etc/network/interfaces
auto ens18
iface ens18 inet static
    address 192.168.3.101
    netmask 255.255.255.0
    gateway 192.168.3.1
    dns-nameservers 8.8.8.8 8.8.4.4


# apt install gnupg -y
# cd /usr/local/src/
# wget https://dev.mysql.com/get/mysql-apt-config_0.8.33-1_all.deb
# dpkg -i mysql-apt-config_0.8.33-1_all.deb

│ Which server version do you wish to receive?                                                                                                                           │
  mysql-8.0  
  mysql-innovation    <-- Picked this 
  mysql-8.4-lts    
  mysql-cluster-8.0   
  mysql-cluster-innovation    
  mysql-cluster-8.4-lts    
  None

 Which MySQL product do you wish to configure?                                                                                                                           
 MySQL Server & Cluster (Currently selected: mysql-innovation) 
 MySQL Connectors (Currently selected: Enabled) 
 Ok

# apt-get update
# apt-get install mysql-server -y 

└─# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 9.2.0 MySQL Community Server - GPL

Copyright (c) 2000, 2025, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> status
--------------
mysql  Ver 9.2.0 for Linux on x86_64 (MySQL Community Server - GPL)

Connection id:		9
Current database:
Current user:		root@localhost
SSL:			Not in use
Current pager:		stdout
Using outfile:		''
Using delimiter:	;
Server version:		9.2.0 MySQL Community Server - GPL
Protocol version:	10
Connection:		Localhost via UNIX socket
Server characterset:	utf8mb4
Db     characterset:	utf8mb4
Client characterset:	utf8mb4
Conn.  characterset:	utf8mb4
UNIX socket:		/var/run/mysqld/mysqld.sock
Binary data as:		Hexadecimal
Uptime:			47 sec

Threads: 2  Questions: 6  Slow queries: 0  Opens: 119  Flush tables: 3  Open tables: 38  Queries per second avg: 0.127

MySQL Innovation NDB Cluster 

cat /etc/network/interfaces
auto ens18
iface ens18 inet static
    address 192.168.3.102
    netmask 255.255.255.0
    gateway 192.168.3.1
    dns-nameservers 8.8.8.8 8.8.4.4

auto ens18
iface ens18 inet static
    address 192.168.3.103
    netmask 255.255.255.0
    gateway 192.168.3.1
    dns-nameservers 8.8.8.8 8.8.4.4

auto ens18
iface ens18 inet static
    address 192.168.3.104
    netmask 255.255.255.0
    gateway 192.168.3.1
    dns-nameservers 8.8.8.8 8.8.4.4

# apt install gnupg -y
# cd /usr/local/src/
# wget https://dev.mysql.com/get/mysql-apt-config_0.8.33-1_all.deb
# dpkg -i mysql-apt-config_0.8.33-1_all.deb

│ Which server version do you wish to receive?                                                                                                                            
 mysql-8.0  
 mysql-innovation     
 mysql-8.4-lts        
 mysql-cluster-8.0       
 mysql-cluster-innovation   <-- Picked this    
 mysql-cluster-8.4-lts   
 None

 Which MySQL product do you wish to configure?                                                                             
  MySQL Server & Cluster (Currently selected: mysql-cluster-innovation)  
  MySQL Connectors (Currently selected: Enabled)     
  Ok

# apt-get update
# apt-get install mysql-cluster-community-server -y 
# apt-get install mysql-cluster-community-management-server   <-- we can pick and choose later which to use
# apt-get install mysql-cluster-community-data-node  -y 

# vi /etc/mysql/conf.d/mysql.cnf

[mysqld]
# Options for mysqld process:
ndbcluster                      # run NDB storage engine

[mysql_cluster]
# Options for NDB Cluster processes:
ndb-connectstring=192.168.3.102  # location of management server

# mkdir /var/lib/mysql-cluster
# cd /var/lib/mysql-cluster
# vi config.ini
[ndbd default]
# Options affecting ndbd processes on all data nodes:
NoOfReplicas=2    # Number of fragment replicas
DataMemory=98M    # How much memory to allocate for data storage

[ndb_mgmd]
# Management process options:
HostName=192.168.3.102          # Hostname or IP address of management node
DataDir=/var/lib/mysql-cluster  # Directory for management node log files

[ndbd]
# Options for data node "A":
                                # (one [ndbd] section per data node)
HostName=192.168.3.103          # Hostname or IP address
NodeId=2                        # Node ID for this data node
DataDir=/var/lib/mysql/data   # Directory for this data node's data files

[ndbd]
# Options for data node "B":
HostName=192.168.3.104          # Hostname or IP address
NodeId=3                        # Node ID for this data node
DataDir=/var/lib/mysql/data   # Directory for this data node's data files

[mysqld]
# SQL node options:
HostName=192.168.3.102          # Hostname or IP address
                                # (additional mysqld connections can be
                                # specified for this node for various
                                # purposes such as running ndb_restore)


┌──(root㉿ndb1)-[/var/lib/mysql-cluster]
└─# ndb_mgmd --initial -f /var/lib/mysql-cluster/config.ini
MySQL Cluster Management Server mysql-9.2.0 ndb-9.2.0
WARNING: --ndb-connectstring is ignored when mgmd is started with -f or config-file.

┌──(root㉿ndb2)-[/var/lib/mysql-cluster]
└─# ndbd
2025-04-01 11:53:11 [ndbd] INFO     -- Angel connected to '192.168.3.102:1186'
2025-04-01 11:53:12 [ndbd] INFO     -- Angel allocated nodeid: 2

┌──(root㉿ndb3)-[/var/lib/mysql-cluster]
└─# ndbd
2025-04-01 11:53:19 [ndbd] INFO     -- Angel connected to '192.168.3.102:1186'
2025-04-01 11:53:20 [ndbd] INFO     -- Angel allocated nodeid: 3


┌──(root㉿ndb1)-[/var/lib/mysql-cluster]
└─# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to management server at 192.168.3.102 port 1186 (using cleartext)
Cluster Configuration
---------------------
[ndbd(NDB)]	2 node(s)
id=2	@192.168.3.103  (mysql-9.2.0 ndb-9.2.0, Nodegroup: 0, *)
id=3	@192.168.3.104  (mysql-9.2.0 ndb-9.2.0, Nodegroup: 0)

[ndb_mgmd(MGM)]	1 node(s)
id=1	@192.168.3.102  (mysql-9.2.0 ndb-9.2.0)

[mysqld(API)]	1 node(s)
id=4 (not connected, accepting connect from 192.168.3.102)

┌──(root㉿ndb1)-[/var/lib/mysql-cluster]
└─# mysql -u root -p -e "select @@hostname"
Enter password:
+-------------------+
| @@hostname        |
+-------------------+
| ndb1.sqlhjalp.com |
+-------------------+

┌──(root㉿ndb2)-[/var/lib/mysql-cluster]
└─# mysql -u root -p -e "select @@hostname"
Enter password:
+-------------------+
| @@hostname        |
+-------------------+
| ndb2.sqlhjalp.com |
+-------------------+

┌──(root㉿ndb3)-[/var/lib/mysql-cluster]
└─# mysql -u root -p -e "select @@hostname"
Enter password:
+-------------------+
| @@hostname        |
+-------------------+
| ndb3.sqlhjalp.com |
+-------------------+

Percona Server 

# cat /etc/network/interfaces
 
auto ens18
iface ens18 inet static
    address 192.168.3.105
    netmask 255.255.255.0
    gateway 192.168.3.1
    dns-nameservers 8.8.8.8 8.8.4.4

# apt install curl  gnupg gnupg2 lsb-release -y
# cd /usr/local/src/
# curl -O https://repo.percona.com/apt/percona-release_latest.generic_all.deb
# dpkg -i  percona-release_latest.generic_all.deb
# percona-release enable-only ps-84-lts release
# percona-release enable tools release
# apt install percona-server-server

┌──(root㉿ps1)-[/usr/local/src]
└─# ps -ef |grep mysql
mysql       5832       1  0 11:58 ?        00:00:03 /usr/sbin/mysqld
root        5924     501  0 12:04 pts/1    00:00:00 grep --color=auto mysql

┌──(root㉿ps1)-[/usr/local/src]
└─# mysql -u root -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 8.4.4-4 Percona Server (GPL), Release '4', Revision '844fde07'

Copyright (c) 2009-2025 Percona LLC and/or its affiliates
Copyright (c) 2000, 2025, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

Percona Cluster 

# cat /etc/network/interfaces
iface ens18 inet static
    address 192.168.3.106
    netmask 255.255.255.0
    gateway 192.168.3.1
    dns-nameservers 8.8.8.8 8.8.4.4

iface ens18 inet static
    address 192.168.3.107
    netmask 255.255.255.0
    gateway 192.168.3.1
    dns-nameservers 8.8.8.8 8.8.4.4

iface ens18 inet static
    address 192.168.3.108
    netmask 255.255.255.0
    gateway 192.168.3.1
    dns-nameservers 8.8.8.8 8.8.4.4

# apt install curl  gnupg gnupg2 lsb-release -y
# cd /usr/local/src/
# curl -O https://repo.percona.com/apt/percona-release_latest.generic_all.deb
# dpkg -i  percona-release_latest.generic_all.deb
# apt update
# percona-release setup pxc80
# apt install -y percona-xtradb-cluster

 
cat /etc/my.cnf
[client]
socket=/var/run/mysqld/mysqld.sock

[xtrabackup]
open-files-limit		= 1000000

[mysqld]
server-id=1
datadir=/var/lib/mysql
socket=/var/run/mysqld/mysqld.sock
log-error=/var/log/mysql/error.log
pid-file=/var/run/mysqld/mysqld.pid
secure-log-path=/var/lib/mysql-files/
# Binary log expiration period is 604800 seconds, which equals 7 days
binlog_expire_logs_seconds=604800

userstat                        = 1

 └─# cat /etc/my.cnf | grep encrypt
pxc-encrypt-cluster-traffic     = OFF
 
┌──(root㉿pxc1)-[/]
└─# cat /etc/my.cnf | grep address
wsrep_cluster_address           = gcomm://192.168.3.106,192.168.3.107,192.168.3.108
# Node IP address
wsrep_node_address=192.168.3.106

┌──(root㉿pxc1)-[/etc/mysql/conf.d]
└─# systemctl start mysql@bootstrap

mysql> show status like 'wsrep_c%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_cert_deps_distance   | 0                                    |
| wsrep_commit_oooe          | 0                                    |
| wsrep_commit_oool          | 0                                    |
| wsrep_commit_window        | 0                                    |
| wsrep_cert_index_size      | 0                                    |
| wsrep_cert_bucket_count    | 1                                    |
| wsrep_causal_reads         | 0                                    |
| wsrep_cert_interval        | 0                                    |
| wsrep_cluster_weight       | 1                                    |
| wsrep_cluster_capabilities |                                      |
| wsrep_cluster_conf_id      | 1                                    |
| wsrep_cluster_size         | 1                                    |
| wsrep_cluster_state_uuid   | 71a6ebf4-0f20-11f0-b4eb-0a0f463a7185 |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
+----------------------------+--------------------------------------+
15 rows in set (0.00 sec)

┌──(root㉿pxc2)-[/etc]
└─# cat /etc/my.cnf | grep address
wsrep_cluster_address           = gcomm://192.168.3.106,192.168.3.107,192.168.3.108
# Node IP address
wsrep_node_address=192.168.3.107

┌──(root㉿pxc2)-[/var/lib/mysql]
└─# rm -Rf *

┌──(root㉿pxc2)-[/var/lib/mysql]
└─# ls -tla
total 8
drwxr-x---  2 mysql mysql 4096 Apr  1 13:36 .
drwxr-xr-x 26 root  root  4096 Apr  1 12:34 ..

┌──(root㉿pxc2)-[/var/lib/mysql]
└─# systemctl start mysql

 ┌──(root㉿pxc3)-[/var/lib/mysql]
└─# rm -Rf *

┌──(root㉿pxc3)-[/var/lib/mysql]
└─# systemctl start mysql


mysql> show status like 'wsrep_c%';
+----------------------------+--------------------------------------+
| Variable_name              | Value                                |
+----------------------------+--------------------------------------+
| wsrep_cert_deps_distance   | 0                                    |
| wsrep_commit_oooe          | 0                                    |
| wsrep_commit_oool          | 0                                    |
| wsrep_commit_window        | 0                                    |
| wsrep_cert_index_size      | 0                                    |
| wsrep_cert_bucket_count    | 1                                    |
| wsrep_causal_reads         | 0                                    |
| wsrep_cert_interval        | 0                                    |
| wsrep_cluster_weight       | 3                                    |
| wsrep_cluster_capabilities |                                      |
| wsrep_cluster_conf_id      | 3                                    |
| wsrep_cluster_size         | 3                                    |
| wsrep_cluster_state_uuid   | 71a6ebf4-0f20-11f0-b4eb-0a0f463a7185 |
| wsrep_cluster_status       | Primary                              |
| wsrep_connected            | ON                                   |
+----------------------------+--------------------------------------+
15 rows in set (0.00 sec)

Setup MySQL Exporters and Prometheus 

per each machine...

apt install -y prometheus-mysqld-exporter
 
CREATE USER IF NOT EXISTS 'prometheus'@'localhost' IDENTIFIED BY '<PASSWORDHERE>';
mysql> show grants for 'prometheus'@'localhost';
+-------------------------------------------------------------------------------+
| Grants for prometheus@localhost                                               |
+-------------------------------------------------------------------------------+
| GRANT SELECT, PROCESS, REPLICATION CLIENT ON *.* TO `prometheus`@`localhost`  |
+-------------------------------------------------------------------------------+
1 row in set (0.00 sec)

┌──(root㉿pxc1)-[~]
└─# ls -ltr /etc/mysql/debian.cnf
-rw-r--r-- 1 root root 50 Apr 1 16:52 /etc/mysql/debian.cnf

# vi /etc/default/prometheus-mysqld-exporter
# systemctl restart prometheus-mysqld-exporter.service
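
The SHOW GRANTS output above implies that a grant along these lines was applied on each instance before restarting the exporter (the transcript edits /etc/default/prometheus-mysqld-exporter for the exporter credentials but doesn't show its contents):

-- Implied by the SHOW GRANTS output above; run on each monitored instance
GRANT SELECT, PROCESS, REPLICATION CLIENT ON *.* TO 'prometheus'@'localhost';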

Prometheus Server 

root@prometheus:/etc/prometheus# vi prometheus.yml


  - job_name: mysqld-exporter
    static_configs:
      - targets:
          - 'mysql1.sqlhjalp.com:9104'
          - 'mariadb1.sqlhjalp.com:9104'
          - 'ps1.sqlhjalp.com:9104'
          - 'pxc1.sqlhjalp.com:9104'
          - 'pxc2.sqlhjalp.com:9104'
          - 'pxc3.sqlhjalp.com:9104'
          - 'ndb1.sqlhjalp.com:9104'
          - 'ndb2.sqlhjalp.com:9104'
          - 'ndb3.sqlhjalp.com:9104'
        labels:
          country: US
          db_env: 'home'
          environment: "demo"
          linux: debian
          nodeuse: server


mysql_up{environment="demo"}

Element	Value
mysql_up{country="US",db_env="home",environment="demo",instance="mariadb1.sqlhjalp.com:9104",job="mysqld-exporter",linux="debian",nodeuse="server"}	1
mysql_up{country="US",db_env="home",environment="demo",instance="mysql1.sqlhjalp.com:9104",job="mysqld-exporter",linux="debian",nodeuse="server"}	1
mysql_up{country="US",db_env="home",environment="demo",instance="ndb1.sqlhjalp.com:9104",job="mysqld-exporter",linux="debian",nodeuse="server"}	1
mysql_up{country="US",db_env="home",environment="demo",instance="ndb2.sqlhjalp.com:9104",job="mysqld-exporter",linux="debian",nodeuse="server"}	1
mysql_up{country="US",db_env="home",environment="demo",instance="ndb3.sqlhjalp.com:9104",job="mysqld-exporter",linux="debian",nodeuse="server"}	1
mysql_up{country="US",db_env="home",environment="demo",instance="ps1.sqlhjalp.com:9104",job="mysqld-exporter",linux="debian",nodeuse="server"}	1
mysql_up{country="US",db_env="home",environment="demo",instance="pxc1.sqlhjalp.com:9104",job="mysqld-exporter",linux="debian",nodeuse="server"}	1
mysql_up{country="US",db_env="home",environment="demo",instance="pxc2.sqlhjalp.com:9104",job="mysqld-exporter",linux="debian",nodeuse="server"}	1
mysql_up{country="US",db_env="home",environment="demo",instance="pxc3.sqlhjalp.com:9104",job="mysqld-exporter",linux="debian",nodeuse="server"}	1

Vault Account SETUP 

mysql> CREATE ROLE IF NOT EXISTS vaultaccess;
mysql>  GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD,   PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE, CREATE ROLE, DROP ROLE ON *.* TO `vaultaccess`@`%` WITH GRANT OPTION;
 
CREATE USER `vaultadmin`@`%` IDENTIFIED BY '<PASSWORDHERE>' DEFAULT ROLE `vaultaccess`@`%` REQUIRE NONE PASSWORD EXPIRE DEFAULT ACCOUNT UNLOCK PASSWORD HISTORY DEFAULT PASSWORD REUSE INTERVAL DEFAULT PASSWORD REQUIRE CURRENT DEFAULT ;
mysql> show grants for vaultadmin;

CREATE ROLE  IF NOT EXISTS  READONLY;
GRANT SELECT,  EXECUTE  ON *.* TO `READONLY`@`%` ;

Mariadb

MariaDB [(none)]> CREATE USER `vaultadmin`@`%` IDENTIFIED BY '<PASSWORDHERE>';
MariaDB [(none)]> GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, RELOAD, PROCESS, FILE, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE USER, EVENT, TRIGGER, CREATE TABLESPACE ON *.* TO `vaultadmin`@`%` WITH GRANT OPTION;

Vault Database engine - repeated per db instance 
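
One step the transcript doesn't show: the database secrets engine has to be enabled on the Vault server before the config write below will work. With the default mount path that is simply:

vault secrets enable database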

vault write database/config/MYSQL plugin_name=mysql-database-plugin connection_url="{{username}}:{{password}}@tcp(mysql1.sqlhjalp.com:3306)/" allowed_roles="my-role" username="vaultadmin" password="<PASSWORDHERE>"
vault read database/config/MYSQL
Key                                   Value
---                                   -----
allowed_roles                         []
connection_details                    map[backend:database connection_url:{{username}}:{{password}}@tcp(mysql1.sqlhjalp.com:3306)/ max_connection_lifetime:0s max_idle_connections:0 max_open_connections:4 username:vaultadmin]
disable_automated_rotation            false
password_policy                       n/a
plugin_name                           mysql-database-plugin
plugin_version                        n/a
root_credentials_rotate_statements    []
rotation_period                       0s
rotation_schedule                     n/a
rotation_window                       0
skip_static_role_import_rotation      false
verify_connection                     true

vault read database/roles/DEMOREADONLY
Key                      Value
---                      -----
creation_statements      [CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT SELECT ON *.* TO '{{name}}'@'%';]
credential_type          password
db_name                  MYSQL
default_ttl              1h
max_ttl                  24h
renew_statements         []
revocation_statements    [DROP USER IF EXISTS '{{name}}'@'%';]
rollback_statements      []
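
The DEMOREADONLY role shown in the read output above implies it was created with something like this (reconstructed from that output, not copied from the original session):

vault write database/roles/DEMOREADONLY \
    db_name=MYSQL \
    creation_statements="CREATE USER '{{name}}'@'%' IDENTIFIED BY '{{password}}'; GRANT SELECT ON *.* TO '{{name}}'@'%';" \
    revocation_statements="DROP USER IF EXISTS '{{name}}'@'%';" \
    default_ttl="1h" \
    max_ttl="24h"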

Vault dynamic user example 

vault read database/creds/DEMOREADONLY
Key                Value
---                -----
lease_id           database/creds/DEMOREADONLY/1SACMdnTGXseMewbA6ek1T42
lease_duration     1h
lease_renewable    true
password           -piWu8YfOFxUkAqR347a
username           v-userpass-k-DEMOREADON-HFRYaNGE


mysql> show grants for 'v-userpass-k-DEMOREADON-HFRYaNGE'@'%';
+---------------------------------------------------------------+
| Grants for v-userpass-k-DEMOREADON-HFRYaNGE@%                 |
+---------------------------------------------------------------+
| GRANT SELECT ON *.* TO `v-userpass-k-DEMOREADON-HFRYaNGE`@`%` |
+---------------------------------------------------------------+
1 row in set (0.00 sec)

In addition..... 

POSTGRESQL

https://docs.vultr.com/how-to-install-postgresql-on-debian-12

# apt install -y postgresql-common
# /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh
# apt update
# apt-cache  policy postgresql
# apt install postgresql -y
# systemctl start postgresql
# systemctl status postgresql
# sudo -u postgres psql
postgres=# ALTER ROLE postgres WITH ENCRYPTED PASSWORD '<password>';
ALTER ROLE
# 

Oracle Database XE 


wget https://download.oracle.com/otn-pub/otn_software/db-express/oracle-database-xe-21c-1.0-1.ol8.x86_64.rpm
wget https://yum.oracle.com/repo/OracleLinux/OL8/appstream/x86_64/getPackage/oracle-database-preinstall-21c-1.0-1.el8.x86_64.rpm

# yum install ./oracle-database-xe-21c-1.0-1.ol8.x86_64.rpm  ./oracle-database-preinstall-21c-1.0-1.el8.x86_64.rpm

# /etc/init.d/oracle-xe-21c configure

$ export ORACLE_SID=XE 
$ export ORAENV_ASK=NO 
$ . /opt/oracle/product/21c/dbhomeXE/bin/oraenv

ORACLE_HOME = [] ? /opt/oracle/product/21c/dbhomeXE
The Oracle base has been set to /opt/oracle


[root@localhost ~]# echo $ORACLE_HOME
/opt/oracle/product/21c/dbhomeXE
[root@localhost ~]# cd  $ORACLE_HOME
[root@localhost dbhomeXE]# pwd
/opt/oracle/product/21c/dbhomeXE

[root@localhost dbhomeXE]# cd bin 

[root@localhost bin]# sqlplus /nolog

SQL*Plus: Release 21.0.0.0.0 - Production on Thu Apr 3 10:35:15 2025
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

SQL>

SQL> CONNECT SYS AS SYSDBA
Enter password:
Connected.
SQL>

SQL> set linesize 1500
SQL> select username, account_status from DBA_USERS;

USERNAME															 ACCOUNT_STATUS
-------------------------------------------------------------------------------------------------------------------------------- --------------------------------
SYS																 OPEN
SYSTEM																 OPEN
XS$NULL 															 LOCKED
OJVMSYS 															 LOCKED
LBACSYS 															 LOCKED
OUTLN																 LOCKED
DBSNMP																 LOCKED
APPQOSSYS															 LOCKED
DBSFWUSER															 LOCKED
GGSYS																 LOCKED
ANONYMOUS															 LOCKED
 
SQL> QUIT
Disconnected from Oracle Database 21c Express Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQLSERVER 

# curl -o /etc/yum.repos.d/mssql-server.repo https://packages.microsoft.com/config/rhel/8/mssql-server-2022.repo
# yum install -y mssql-server
# /opt/mssql/bin/mssql-conf setup
# systemctl status mssql-server

curl https://packages.microsoft.com/config/rhel/8/prod.repo | sudo tee /etc/yum.repos.d/mssql-release.repo

# yum install -y mssql-tools18 unixODBC-devel

# yum check-update
# yum update mssql-tools18

echo 'export PATH="$PATH:/opt/mssql-tools18/bin"' >> ~/.bash_profile
source ~/.bash_profile

# /opt/mssql/bin/mssql-conf set-sa-password
Enter the SQL Server system administrator password:
Confirm the SQL Server system administrator password:
Configuring SQL Server...

# sqlcmd -S localhost -No  -U sa

# sqlcmd -S localhost -No  -U sa
Password:
1> CREATE DATABASE TestDB;
2> SELECT Name FROM sys.databases;
3> GO
Name
--------------------------------------------------------------------------------------------------------------------------------
master
tempdb
model
msdb
TestDB

(5 rows affected)


USE TestDB;

CREATE TABLE dbo.Inventory
(
    id INT, name NVARCHAR (50),  quantity INT, PRIMARY KEY (id)
);
GO

INSERT INTO dbo.Inventory VALUES (1, 'banana', 150);
INSERT INTO dbo.Inventory VALUES (2, 'orange', 154);

GO


SELECT * FROM dbo.Inventory WHERE quantity > 152;
GO

1> SELECT * FROM dbo.Inventory;
2> GO
id          name                                               quantity
----------- -------------------------------------------------- -----------
          1 banana                                                     150
          2 orange                                                     154

MONGODB - PERCONA

https://docs.percona.com/percona-server-for-mongodb/8.0/install/apt.html

# apt install -y gnupg2 gnupg curl 
# wget https://repo.percona.com/apt/percona-release_latest.$(lsb_release -sc)_all.deb
# dpkg -i percona-release_latest.$(lsb_release -sc)_all.deb
# percona-release --help | grep psmdb
psmdb36 psmdb40 psmdb42 psmdb44 psmdb60 psmdb50 psmdb70 psmdb80 psmdb40
psmdb60pro psmdb70pro
psmdb-70-pro psmdb-60-pro
psmdb-36 psmdb-40 psmdb-42 psmdb-44 psmdb-60 psmdb-50 psmdb-70 psmdb-80 psmdb40
psmdb-60-pro psmdb-70-pro
psmdb-70-pro psmdb-60-pro

# percona-release enable psmdb-80 release
# apt update
# apt install percona-server-mongodb
# apt-cache madison percona-server-mongodb
# ls -ltr /etc/mongod.conf
-rw-r--r-- 1 root root 1403 Feb 11 23:56 /etc/mongod.conf

# vi /etc/systemd/system/enable-transparent-huge-pages.service
# cat  /etc/systemd/system/enable-transparent-huge-pages.service
[Unit]
Description=Enable Transparent Hugepages (THP)
DefaultDependencies=no
After=sysinit.target local-fs.target
Before=mongod.service

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo always | tee /sys/kernel/mm/transparent_hugepage/enabled > /dev/null && echo defer+madvise | tee /sys/kernel/mm/transparent_hugepage/defrag > /dev/null && echo 0 | tee /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none > /dev/null && echo 1 | tee /proc/sys/vm/overcommit_memory > /dev/null'

[Install]
WantedBy=basic.target

# systemctl daemon-reload
# systemctl start enable-transparent-huge-pages
# cat /sys/kernel/mm/transparent_hugepage/enabled && cat /sys/kernel/mm/transparent_hugepage/defrag && cat /sys/kernel/mm/transparent_hugepage/khugepaged/max_ptes_none && cat /proc/sys/vm/overcommit_memory
[always] madvise never
always defer [defer+madvise] madvise never
0
1

# systemctl enable enable-transparent-huge-pages
# ls -lr /var/lib/mongodb/
total 0


# systemctl start mongod
# systemctl status mongod
# mongosh
Current Mongosh Log ID:	67ed4eb0cd874b942d98ebcf
Connecting to:		mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.3.2
MongoNetworkError: connect ECONNREFUSED 127.0.0.1:27017

MONGODB

https://www.mongodb.com/docs/manual/tutorial/install-mongodb-on-debian/

 
# apt-get install gnupg curl
# curl -fsSL https://www.mongodb.org/static/pgp/server-8.0.asc | \
   sudo gpg -o /usr/share/keyrings/mongodb-server-8.0.gpg \
  --dearmor
# echo "deb [ signed-by=/usr/share/keyrings/mongodb-server-8.0.gpg ] http://repo.mongodb.org/apt/debian bookworm/mongodb-org/8.0 main" | sudo tee /etc/apt/sources.list.d/mongodb-org-8.0.list 
# apt-get update
# apt-get install -y mongodb-org
# systemctl daemon-reload
# ulimit -c unlimited
# ulimit -n 64000
# ulimit -f unlimited
# ulimit -t unlimited
# ulimit -l unlimited
# ulimit -m unlimited
# ulimit -u 64000
# ulimit -a
real-time non-blocking time  (microseconds, -R) unlimited
core file size              (blocks, -c) unlimited
data seg size               (kbytes, -d) unlimited
scheduling priority                 (-e) 0
file size                   (blocks, -f) unlimited
pending signals                     (-i) 15471
max locked memory           (kbytes, -l) unlimited
max memory size             (kbytes, -m) unlimited
open files                          (-n) 64000
pipe size                (512 bytes, -p) 8
POSIX message queues         (bytes, -q) 819200
real-time priority                  (-r) 0
stack size                  (kbytes, -s) 8192
cpu time                   (seconds, -t) unlimited
max user processes                  (-u) 64000
virtual memory              (kbytes, -v) unlimited
file locks                          (-x) unlimited

# systemctl start mongod



CASSANDRA

https://docs.vultr.com/how-to-install-apache-cassandra-on-debian-12

# apt update
# cat /etc/apt/sources.list
deb https://deb.debian.org/debian bookworm main non-free-firmware
deb http://deb.debian.org/debian bookworm-updates main
deb http://deb.debian.org/debian-security bookworm-security main
deb http://deb.debian.org/debian unstable main non-free contrib 

# apt update
# apt install curl
# apt install openjdk-17-jdk
# java --version
openjdk 17.0.14 2025-01-21
OpenJDK Runtime Environment (build 17.0.14+7-Debian-1deb12u1)
OpenJDK 64-Bit Server VM (build 17.0.14+7-Debian-1deb12u1, mixed mode, sharing)

# echo "deb [signed-by=/etc/apt/keyrings/apache-cassandra.asc] https://debian.cassandra.apache.org 41x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
# curl -o /etc/apt/keyrings/apache-cassandra.asc https://downloads.apache.org/cassandra/KEYS
# apt update
# apt install cassandra

# ls -lr /etc/cassandra/cassandra.yaml
-rw-r--r-- 1 root root 91468 Jan 27 07:28 /etc/cassandra/cassandra.yaml

# vi /etc/cassandra/cassandra.yaml
# systemctl restart cassandra

# ls -lr /var/log/cassandra/
total 0

# systemctl status cassandra
# nodetool status
nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused'.

# cqlsh -u cassandra -p cassandra

Warning: Using a password on the command line interface can be insecure.
Recommendation: use the credentials file to securely provide the password.

Connection error: ('Unable to connect to any servers', {'127.0.0.1:9042': ConnectionRefusedError(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})