Utilities and Tools

CLI Development

My recommended CLI development tools:

Click

Why I recommend it:

  • Beautiful command line interfaces
  • Intuitive and composable API
  • Extensive documentation
  • From the creators of Flask
  • Great for both simple and complex CLIs

Key Features:

  • 🌈 Colorful output support
  • 📝 Automatic help page generation
  • 🔗 Command nesting
  • 🔍 Argument validation
  • 🔧 Easy configuration

Quick Example:

import click

@click.command()
@click.option('--count', default=1, help='Number of greetings.')
@click.option('--name', prompt='Your name', help='The person to greet.')
def hello(count, name):
    """Simple program that greets NAME for a total of COUNT times."""
    for x in range(count):
        click.echo(f'Hello {name}!')

if __name__ == '__main__':
    hello()
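
Bonus Example (command nesting and color):

A rough sketch of the command nesting and colorful output listed above, using click.group and click.secho; the status subcommand and its --loud flag are made up purely for illustration.

import click

@click.group()
def cli():
    """Entry point that groups related subcommands."""
    pass

@cli.command()
@click.option('--loud', is_flag=True, help='Shout the status.')
def status(loud):
    """Print a colored status line."""
    message = 'All systems go'
    if loud:
        message = message.upper()
    click.secho(message, fg='green', bold=True)

if __name__ == '__main__':
    cli()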

Rich

Why I recommend it:

  • Beautiful terminal formatting
  • Extensive styling capabilities
  • Great for data presentation
  • Makes CLI apps look professional
  • Easy to use yet powerful

Key Features:

  • 🌈 24-bit color and styles
  • 📈 Tables and progress bars
  • 💻 Syntax highlighting
  • 🎬 Live displays
  • 📊 Markdown/markup rendering

Quick Example:

from rich import print
from rich.table import Table

table = Table(title="My Tools")
table.add_column("Name", style="cyan")
table.add_column("Version", style="magenta")
table.add_row("Python", "3.12.0")
table.add_row("Rich", "13.7.0")
print(table)
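
Bonus Example (progress bars and syntax highlighting):

A quick sketch of the progress bars and syntax highlighting listed above; the loop and the code snippet are placeholders just to show the calls.

from rich.console import Console
from rich.progress import track
from rich.syntax import Syntax
import time

console = Console()

# Progress bar over any iterable
for _ in track(range(20), description="Processing..."):
    time.sleep(0.05)  # Simulate work

# Syntax-highlighted snippet printed to the terminal
snippet = 'def greet(name):\n    return f"Hello {name}!"'
console.print(Syntax(snippet, "python", line_numbers=True))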

tqdm

Why I recommend it:

  • Smart progress bars
  • Works with iterables
  • Minimal code changes needed
  • Great for long-running operations

Key Features:

  • 📊 Progress monitoring
  • ⏱️ ETA calculation
  • 📈 Automatic update rate
  • 🔄 Nested progress bars

Quick Example:

from tqdm import tqdm
import time

for i in tqdm(range(100)):
    time.sleep(0.1)  # Simulate work
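
Bonus Example (nested progress bars):

A minimal sketch of the nested progress bars listed above; the epoch/batch loops are a made-up stand-in for real work.

from tqdm import tqdm
import time

# Outer bar tracks epochs, inner bar tracks batches;
# leave=False clears each finished inner bar to keep the display tidy.
for _ in tqdm(range(3), desc="epochs"):
    for _ in tqdm(range(50), desc="batches", leave=False):
        time.sleep(0.01)  # Simulate work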

HTTP Clients

My recommended HTTP clients:

Requests

Why I recommend it:

  • Elegant and simple HTTP library
  • Industry standard for HTTP in Python
  • Human-friendly API
  • Sessions, authentication, timeouts, and proxies built in

Key Features:

  • 🌈 Beautiful API design
  • 🔒 SSL/TLS verification
  • 🌎 Session support
  • 💪 Built-in and pluggable authentication

Quick Example:

import requests

# Simple GET request
response = requests.get('https://api.github.com')
data = response.json()

# POST with authentication
response = requests.post(
    'https://api.example.com/data',
    json={'key': 'value'},
    headers={'Authorization': 'Bearer token'}
)
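
Bonus Example (sessions):

A small sketch of the session support listed above: a Session reuses connections and carries headers and cookies across requests. The api.example.com endpoints and the token are placeholders.

import requests

with requests.Session() as session:
    # Headers set here apply to every request made through the session
    session.headers.update({'Authorization': 'Bearer token'})
    users = session.get('https://api.example.com/users', timeout=10)
    orders = session.get('https://api.example.com/orders', timeout=10)
    users.raise_for_status()
    orders.raise_for_status()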

Web Scraping

My recommended web scraping tools:

BeautifulSoup

Why I recommend it:

  • Easy HTML/XML parsing
  • Handles messy, real-world HTML gracefully
  • Intuitive API
  • Great documentation

Key Features:

  • 🔍 Easy navigation
  • 🛠️ Multiple parsers
  • 🔧 Search by tags/attributes
  • 📚 HTML cleanup

Quick Example:

from bs4 import BeautifulSoup
import requests

# Parse HTML
html = requests.get('https://example.com').text
soup = BeautifulSoup(html, 'html.parser')

# Find elements
title = soup.find('h1').text
links = [a['href'] for a in soup.find_all('a', href=True)]  # only anchors that have an href
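
Bonus Example (search by tags, attributes, and CSS selectors):

A self-contained sketch of the tag/attribute search listed above, plus select() for CSS selectors; the HTML snippet is invented for the example.

from bs4 import BeautifulSoup

html = """
<div class="article">
  <h2 class="title">First post</h2>
  <a class="external" href="https://example.com">Read more</a>
</div>
"""

soup = BeautifulSoup(html, 'html.parser')

# Search by tag plus attribute (class_ avoids clashing with the class keyword)
external_links = soup.find_all('a', class_='external')

# CSS selectors via select()
titles = [h2.get_text(strip=True) for h2 in soup.select('div.article h2.title')]

print(titles)                      # ['First post']
print(external_links[0]['href'])   # https://example.com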

Scrapy

Why I recommend it:

  • Production-grade web scraping
  • Asynchronous architecture
  • Built-in pipeline system
  • Extensive middleware support

Key Features:

  • 🚀 High performance
  • 🔄 Concurrent requests
  • 🛠️ Built-in extensions
  • 💾 Data export

Quick Example:

import scrapy

class BookSpider(scrapy.Spider):
    name = 'bookspider'
    start_urls = ['http://books.example.com']

    def parse(self, response):
        for book in response.css('.book-item'):
            yield {
                'title': book.css('.title::text').get(),
                'price': book.css('.price::text').get()
            }
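
Bonus Example (pagination and data export):

A sketch extending the spider above to follow a "next page" link with response.follow; the a.next selector is hypothetical and depends on the site's markup.

import scrapy

class PagedBookSpider(scrapy.Spider):
    name = 'pagedbooks'
    start_urls = ['http://books.example.com']  # same placeholder site as above

    def parse(self, response):
        for book in response.css('.book-item'):
            yield {
                'title': book.css('.title::text').get(),
                'price': book.css('.price::text').get()
            }
        # Follow the pagination link and parse the next page with the same callback
        next_page = response.css('a.next::attr(href)').get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)

For the built-in data export, running the spider with scrapy runspider spider.py -o books.json writes the scraped items straight to a JSON file.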