If you’ve ever wanted to consolidate multiple IPTV providers, watch live TV from anywhere over the internet, and never have a stream die on you without a backup ready to go — Dispatcharr is the tool you’ve been looking for.
What Is Dispatcharr?
Dispatcharr (pronounced like “dispatcher”) is an open-source IPTV stream manager and proxy. Think of it as the *arr family’s IPTV cousin — simple, smart, and built for reliability. It sits between your IPTV sources and your media players, handling stream routing, automatic failover, EPG guide data, and VOD libraries from a single self-hosted interface.
From the official docs:
Dispatcharr is an IPTV streaming playlist (M3U/M3U8) editor and proxy. Additional features, such as robust proxying and support for multiple clients on a single backend stream, make it a comprehensive solution for today’s streaming needs.
It currently sits at nearly 3,000 GitHub stars — a project that started as one person’s personal solution and grew into something the whole self-hosted community is rallying around.
Getting Free IPTV Channels to Use With It
Before we even talk about features — where do you get channels? One of the best free sources is the iptv-org/iptv project on GitHub, a community-maintained collection of publicly available IPTV channels from all over the world. It has over 114,000 stars and is updated daily by a bot.
The main playlist is a single M3U URL you can drop straight into Dispatcharr:
https://iptv-org.github.io/iptv/index.m3u
There are also filtered playlists by country, language, and category — all listed in the project’s PLAYLISTS.md. These are publicly available stream links (not stored video files), and the project releases its playlists into the public domain. Add it as an M3U account in Dispatcharr and you’ve got international live TV running in minutes.
What Can Dispatcharr Do?
- Stream Proxy & Relay — Proxies IPTV streams with support for multiple clients on a single backend connection. Your provider sees one stream; your whole household watches it.
- Multiple IPTV Provider Support — Add as many M3U accounts or Xtream Codes providers as you want. Dispatcharr unifies them all into a single organized channel list, with per-account max stream limits, refresh schedules, and stream filters.
- EPG Auto-Match — Automatically maps program guide data to your channels. Supports XMLTV sources (including Schedules Direct), Gracenote Station IDs for Emby, and fully customizable dummy EPG entries for channels without guide data.
- Multi-Format Output — Export as M3U, XMLTV EPG, Xtream Codes API, or HDHomeRun device. Plex, Jellyfin, Emby, and ChannelsDVR all discover Dispatcharr as a live TV source natively.
- Multi-User & Access Control — Three user tiers (Admin, Standard, Streamer) with per-user Xtream Codes passwords and channel profile restrictions. Network-based access control lets you restrict the M3U/EPG endpoints, stream URLs, the XC API, and the UI independently, each by CIDR range.
- Flexible Streaming Backends — Choose from ffmpeg (remux by default, or full custom transcoding parameters), VLC, Streamlink, yt-dlp, or write your own custom command. Each channel can use a different stream profile.
- Plugin System — Build custom integrations and automation workflows using Dispatcharr’s plugin system.
- Fully Self-Hosted — No cloud dependency, no third-party accounts required. Total control.
Automatic Backup Streams & Failover — The Killer Feature
This is what separates Dispatcharr from just pointing your media center at a static M3U file. The official documentation describes it clearly:
Within Dispatcharr, a single channel can be composed of multiple streams. The system initiates playback using the first stream listed in the channel. According to the configured Proxy Settings, Dispatcharr monitors for buffering and, if detected, automatically switches to the next stream in the channel. This process of monitoring and switching continues until all streams are exhausted, ensuring consistent playback quality.
In practice: you stack backup sources on every channel. Provider A starts buffering or goes offline? Dispatcharr silently promotes the next stream in the list — no interruption, no loading spinner, no manual intervention. Combined with something like the free iptv-org playlists as a fallback source, you have remarkable resilience for zero cost.
The buffering detection thresholds (timeout, speed floor, buffer chunk TTL) are all tunable per your setup from the Proxy Settings page.
VOD — Movies and Series from Your Provider
Beyond live TV, Dispatcharr handles Video on Demand through your IPTV provider’s VOD catalog. Once you add “VOD – Movies” or “VOD – Series” groups in the M3U account manager (requires an Xtream Codes account with VOD scanning enabled), Dispatcharr exposes them with rich metadata pulled from IMDb and TMDB. The VOD library is browsable and streamable directly from the interface, with the same Xtream Codes API output your media center already understands.
On the roadmap: granular metadata control, local media library import, and automatic fallback videos for unavailable channels. The project is actively developed with 43 releases since launch.
How I Deploy It on Kubernetes
I run Dispatcharr on my Talos Linux Kubernetes cluster using Ansible. The full stack — Django web app, Celery worker, PostgreSQL, and Redis — is managed by a single Ansible role with a top-level playbook as the entry point.
The Playbook
The playbook is simple. It loads cluster variables, pulls secrets at runtime from whatever vault service you use (HashiCorp Vault, Doppler, AWS Secrets Manager — anything that can populate Ansible variables), then runs the dispatcharr role:
# playbooks/20-deploy_dispatcharr.yml
---
- name: Deploy Dispatcharr IPTV stream manager
hosts: localhost
connection: local
gather_facts: false
vars_files:
- ../group_vars/talos_cluster.yml
roles:
- role: your_secrets_vault # populate dispatcharr_postgres_password,
# dispatcharr_django_secret_key at runtime
- role: dispatcharr
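The group_vars file the playbook loads just carries cluster-wide settings shared across roles. A minimal sketch — the variable names and values here are illustrative assumptions, not the contents of my actual repo:

```yaml
# group_vars/talos_cluster.yml — illustrative sketch, adjust to your cluster
# Point kubernetes.core modules at the right cluster context
k8s_kubeconfig: "{{ lookup('ansible.builtin.env', 'KUBECONFIG') | default('~/.kube/config', true) }}"

# Site-wide values consumed by role defaults
dispatcharr_ingress_host: "dispatcharr.example.internal"
dispatcharr_loadbalancer_ip: "192.168.1.50"
dispatcharr_tz: "America/New_York"
```

Anything defined here overrides the role defaults shown below, so per-cluster tuning never requires editing the role itself.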
Role Defaults
All tunable values live in defaults/main.yml. Nothing sensitive is here — secrets are injected at runtime from the vault role and never committed to the repo:
# roles/dispatcharr/defaults/main.yml
# Namespace
dispatcharr_namespace: livetv
# Container images
dispatcharr_image: "ghcr.io/dispatcharr/dispatcharr:latest"
postgres_image: "postgres:17-alpine"
redis_image: "redis:7-alpine"
# Networking
dispatcharr_ingress_host: "dispatcharr.yourdomain.com"
dispatcharr_loadbalancer_ip: "<YOUR_LB_IP>"
dispatcharr_port: 9191
# Application settings
dispatcharr_env: "production"
dispatcharr_log_level: "INFO"
dispatcharr_django_settings_module: "dispatcharr.settings"
dispatcharr_pythonunbuffered: "1"
dispatcharr_tz: "America/New_York"
# PostgreSQL connection
dispatcharr_postgres_host: "dispatcharr-db" # Kubernetes Service name
dispatcharr_postgres_port: "5432"
dispatcharr_postgres_db: "dispatcharr"
dispatcharr_postgres_user: "dispatcharr"
# dispatcharr_postgres_password <-- injected from vault at runtime
# Redis connection
dispatcharr_redis_host: "dispatcharr-redis" # Kubernetes Service name
dispatcharr_redis_port: "6379"
dispatcharr_redis_db: "0"
# Persistent storage (Longhorn)
dispatcharr_storage_class: "longhorn"
dispatcharr_postgres_storage: "5Gi"
dispatcharr_redis_storage: "1Gi"
dispatcharr_data_storage: "10Gi"
# Resource limits — PostgreSQL
dispatcharr_postgres_cpu_request: "500m"
dispatcharr_postgres_memory_request: "1024Mi"
dispatcharr_postgres_cpu_limit: "1000m"
dispatcharr_postgres_memory_limit: "2048Mi"
# Resource limits — Redis
dispatcharr_redis_cpu_request: "500m"
dispatcharr_redis_memory_request: "512Mi"
dispatcharr_redis_cpu_limit: "1000m"
dispatcharr_redis_memory_limit: "1024Mi"
# Resource limits — Dispatcharr web (Django)
dispatcharr_web_cpu_request: "2000m"
dispatcharr_web_memory_request: "2048Mi"
dispatcharr_web_cpu_limit: "3000m"
dispatcharr_web_memory_limit: "3096Mi"
# Resource limits — Celery worker
dispatcharr_celery_cpu_request: "2000m"
dispatcharr_celery_memory_request: "512Mi"
dispatcharr_celery_cpu_limit: "3000m"
dispatcharr_celery_memory_limit: "1024Mi"
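Because these live in role defaults, they sit at the bottom of Ansible’s variable precedence — play vars, group_vars, and extra-vars all win without touching the role. A hypothetical slimmed-down deployment (the override values are illustrative, not recommendations):

```yaml
# Example: override role defaults at the play level for a smaller cluster.
# Ansible precedence means these beat roles/dispatcharr/defaults/main.yml.
- name: Deploy Dispatcharr with a smaller footprint
  hosts: localhost
  connection: local
  gather_facts: false
  vars:
    dispatcharr_web_cpu_request: "500m"
    dispatcharr_web_memory_request: "1024Mi"
    # Pin a release tag instead of tracking latest
    dispatcharr_image: "ghcr.io/dispatcharr/dispatcharr:<pinned-tag>"
  roles:
    - role: dispatcharr
```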
Role Tasks
The role tasks walk through the full stack in order: namespace → secrets → PVCs → PostgreSQL → Redis → Dispatcharr web → Celery worker → wait for rollout.
Namespace
- name: Create dispatcharr namespace
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Namespace
metadata:
name: "{{ dispatcharr_namespace }}"
labels:
app.kubernetes.io/part-of: dispatcharr
Secrets
Kubernetes Secret objects are created at deploy time from variables the vault role already populated. The values are never in source control:
- name: Create PostgreSQL password Secret
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Secret
metadata:
name: dispatcharr-db-secret
namespace: "{{ dispatcharr_namespace }}"
type: Opaque
stringData:
POSTGRES_PASSWORD: "{{ dispatcharr_postgres_password }}"
- name: Create Django secret key Secret
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Secret
metadata:
name: dispatcharr-django-secret
namespace: "{{ dispatcharr_namespace }}"
type: Opaque
stringData:
DJANGO_SECRET_KEY: "{{ dispatcharr_django_secret_key }}"
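If you don’t run a dedicated secrets service, the `your_secrets_vault` role from the playbook can be as small as a task that reads environment variables. This is a minimal stand-in I’m sketching for illustration, not the pattern from my actual repo — the env var names are assumptions:

```yaml
# roles/your_secrets_vault/tasks/main.yml — minimal environment-variable stand-in
- name: Populate Dispatcharr secrets from the environment
  ansible.builtin.set_fact:
    dispatcharr_postgres_password: "{{ lookup('ansible.builtin.env', 'DISPATCHARR_POSTGRES_PASSWORD') }}"
    dispatcharr_django_secret_key: "{{ lookup('ansible.builtin.env', 'DISPATCHARR_DJANGO_SECRET_KEY') }}"
  no_log: true  # keep secret values out of play output

- name: Fail fast if either secret is missing
  ansible.builtin.assert:
    that:
      - dispatcharr_postgres_password | default('') | length > 0
      - dispatcharr_django_secret_key | default('') | length > 0
    fail_msg: "Export DISPATCHARR_POSTGRES_PASSWORD and DISPATCHARR_DJANGO_SECRET_KEY before running the play"
```

Swapping this for a HashiCorp Vault or Doppler lookup changes only this role — the Secret-creation tasks above stay identical.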
Persistent Volume Claims
Three PVCs on Longhorn — one per stateful component. The apply: false flag prevents Ansible from ever trying to resize or replace a bound PVC:
- name: Create PostgreSQL PVC
kubernetes.core.k8s:
state: present
apply: false
definition:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dispatcharr-postgres-data
namespace: "{{ dispatcharr_namespace }}"
spec:
accessModes: [ReadWriteOnce]
storageClassName: "{{ dispatcharr_storage_class }}"
resources:
requests:
storage: "{{ dispatcharr_postgres_storage }}"
- name: Create Redis PVC
kubernetes.core.k8s:
state: present
apply: false
definition:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dispatcharr-redis-data
namespace: "{{ dispatcharr_namespace }}"
spec:
accessModes: [ReadWriteOnce]
storageClassName: "{{ dispatcharr_storage_class }}"
resources:
requests:
storage: "{{ dispatcharr_redis_storage }}"
- name: Create Dispatcharr data PVC
kubernetes.core.k8s:
state: present
apply: false
definition:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: dispatcharr-data
namespace: "{{ dispatcharr_namespace }}"
spec:
accessModes: [ReadWriteOnce]
storageClassName: "{{ dispatcharr_storage_class }}"
resources:
requests:
storage: "{{ dispatcharr_data_storage }}"
PostgreSQL StatefulSet
PostgreSQL runs as a StatefulSet with OrderedReady pod management and a 60-second termination grace period. This matters with Longhorn RWO volumes — without it, the descheduler can evict the pod and the replacement can’t attach the volume because the old pod hasn’t finished detaching. OrderedReady guarantees the old pod is completely gone before the new one starts.
- name: Create PostgreSQL headless Service (for StatefulSet)
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Service
metadata:
name: dispatcharr-db-headless
namespace: "{{ dispatcharr_namespace }}"
labels:
app: dispatcharr-db
app.kubernetes.io/part-of: dispatcharr
spec:
clusterIP: None
selector:
app: dispatcharr-db
ports:
- name: postgres
port: 5432
targetPort: postgres
- name: Deploy PostgreSQL
kubernetes.core.k8s:
state: present
definition:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: dispatcharr-db
namespace: "{{ dispatcharr_namespace }}"
labels:
app: dispatcharr-db
app.kubernetes.io/part-of: dispatcharr
spec:
serviceName: dispatcharr-db-headless
replicas: 1
podManagementPolicy: OrderedReady
selector:
matchLabels:
app: dispatcharr-db
template:
metadata:
labels:
app: dispatcharr-db
app.kubernetes.io/part-of: dispatcharr
spec:
terminationGracePeriodSeconds: 60
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/part-of
operator: In
values: [dispatcharr]
topologyKey: kubernetes.io/hostname
containers:
- name: postgres
image: "{{ postgres_image }}"
env:
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
- name: POSTGRES_DB
value: "{{ dispatcharr_postgres_db }}"
- name: POSTGRES_USER
value: "{{ dispatcharr_postgres_user }}"
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: dispatcharr-db-secret
key: POSTGRES_PASSWORD
ports:
- name: postgres
containerPort: 5432
resources:
requests:
cpu: "{{ dispatcharr_postgres_cpu_request }}"
memory: "{{ dispatcharr_postgres_memory_request }}"
limits:
cpu: "{{ dispatcharr_postgres_cpu_limit }}"
memory: "{{ dispatcharr_postgres_memory_limit }}"
volumeMounts:
- name: postgres-data
mountPath: /var/lib/postgresql/data
volumes:
- name: postgres-data
persistentVolumeClaim:
claimName: dispatcharr-postgres-data
- name: Create PostgreSQL Service
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Service
metadata:
name: "{{ dispatcharr_postgres_host }}"
namespace: "{{ dispatcharr_namespace }}"
labels:
app: dispatcharr-db
app.kubernetes.io/part-of: dispatcharr
spec:
selector:
app: dispatcharr-db
ports:
- name: postgres
port: 5432
targetPort: 5432
Redis StatefulSet
Same pattern as PostgreSQL — StatefulSet with OrderedReady, 60-second graceful shutdown (so Redis can flush AOF/RDB cleanly), and the same anti-affinity rule:
- name: Create Redis headless Service (for StatefulSet)
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Service
metadata:
name: dispatcharr-redis-headless
namespace: "{{ dispatcharr_namespace }}"
labels:
app: dispatcharr-redis
app.kubernetes.io/part-of: dispatcharr
spec:
clusterIP: None
selector:
app: dispatcharr-redis
ports:
- name: redis
port: 6379
targetPort: redis
- name: Deploy Redis
kubernetes.core.k8s:
state: present
definition:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: dispatcharr-redis
namespace: "{{ dispatcharr_namespace }}"
labels:
app: dispatcharr-redis
app.kubernetes.io/part-of: dispatcharr
spec:
serviceName: dispatcharr-redis-headless
replicas: 1
podManagementPolicy: OrderedReady
selector:
matchLabels:
app: dispatcharr-redis
template:
metadata:
labels:
app: dispatcharr-redis
app.kubernetes.io/part-of: dispatcharr
spec:
terminationGracePeriodSeconds: 60
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/part-of
operator: In
values: [dispatcharr]
topologyKey: kubernetes.io/hostname
containers:
- name: redis
image: "{{ redis_image }}"
ports:
- name: redis
containerPort: 6379
resources:
requests:
cpu: "{{ dispatcharr_redis_cpu_request }}"
memory: "{{ dispatcharr_redis_memory_request }}"
limits:
cpu: "{{ dispatcharr_redis_cpu_limit }}"
memory: "{{ dispatcharr_redis_memory_limit }}"
volumeMounts:
- name: redis-data
mountPath: /data
volumes:
- name: redis-data
persistentVolumeClaim:
claimName: dispatcharr-redis-data
- name: Create Redis Service
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Service
metadata:
name: "{{ dispatcharr_redis_host }}"
namespace: "{{ dispatcharr_namespace }}"
labels:
app: dispatcharr-redis
app.kubernetes.io/part-of: dispatcharr
spec:
selector:
app: dispatcharr-redis
ports:
- name: redis
port: 6379
targetPort: 6379
Dispatcharr Web Deployment
The web pod uses a Recreate strategy (no rolling update) because the data PVC is ReadWriteOnce — two replicas can’t mount the same Longhorn volume simultaneously. Same anti-affinity rule ensures it lands on its own node:
- name: Deploy Dispatcharr Web
kubernetes.core.k8s:
state: present
definition:
apiVersion: apps/v1
kind: Deployment
metadata:
name: dispatcharr-web
namespace: "{{ dispatcharr_namespace }}"
labels:
app: dispatcharr-web
app.kubernetes.io/part-of: dispatcharr
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: dispatcharr-web
template:
metadata:
labels:
app: dispatcharr-web
app.kubernetes.io/part-of: dispatcharr
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/part-of
operator: In
values: [dispatcharr]
topologyKey: kubernetes.io/hostname
containers:
- name: dispatcharr-web
image: "{{ dispatcharr_image }}"
ports:
- name: http
containerPort: "{{ dispatcharr_port }}"
env:
- name: DISPATCHARR_ENV
value: "{{ dispatcharr_env }}"
- name: DISPATCHARR_LOG_LEVEL
value: "{{ dispatcharr_log_level }}"
- name: DJANGO_SECRET_KEY
valueFrom:
secretKeyRef:
name: dispatcharr-django-secret
key: DJANGO_SECRET_KEY
- name: DJANGO_SETTINGS_MODULE
value: "{{ dispatcharr_django_settings_module }}"
- name: PYTHONUNBUFFERED
value: "{{ dispatcharr_pythonunbuffered }}"
- name: POSTGRES_HOST
value: "{{ dispatcharr_postgres_host }}"
- name: POSTGRES_PORT
value: "{{ dispatcharr_postgres_port }}"
- name: POSTGRES_DB
value: "{{ dispatcharr_postgres_db }}"
- name: POSTGRES_USER
value: "{{ dispatcharr_postgres_user }}"
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: dispatcharr-db-secret
key: POSTGRES_PASSWORD
- name: REDIS_HOST
value: "{{ dispatcharr_redis_host }}"
- name: REDIS_PORT
value: "{{ dispatcharr_redis_port }}"
- name: REDIS_DB
value: "{{ dispatcharr_redis_db }}"
- name: TZ
value: "{{ dispatcharr_tz }}"
resources:
requests:
cpu: "{{ dispatcharr_web_cpu_request }}"
memory: "{{ dispatcharr_web_memory_request }}"
limits:
cpu: "{{ dispatcharr_web_cpu_limit }}"
memory: "{{ dispatcharr_web_memory_limit }}"
volumeMounts:
- name: dispatcharr-data
mountPath: /data
volumes:
- name: dispatcharr-data
persistentVolumeClaim:
claimName: dispatcharr-data
- name: Create Dispatcharr Web Service (LoadBalancer)
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Service
metadata:
name: dispatcharr-web
namespace: "{{ dispatcharr_namespace }}"
labels:
app: dispatcharr-web
app.kubernetes.io/part-of: dispatcharr
annotations:
kube-vip.io/loadbalancerIPs: "{{ dispatcharr_loadbalancer_ip }}"
spec:
type: LoadBalancer
selector:
app: dispatcharr-web
ports:
- name: http
port: 80
targetPort: "{{ dispatcharr_port }}"
- name: Create Dispatcharr HTTPRoute (Gateway API)
kubernetes.core.k8s:
state: present
definition:
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: dispatcharr
namespace: "{{ dispatcharr_namespace }}"
labels:
app.kubernetes.io/part-of: dispatcharr
spec:
parentRefs:
- name: default-gateway
namespace: gateway-api
sectionName: websecure
hostnames:
- "{{ dispatcharr_ingress_host }}"
rules:
- backendRefs:
- name: dispatcharr-web
port: 80
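One prerequisite for the HTTPRoute: since it lives in the livetv namespace but attaches to a Gateway in gateway-api, the Gateway’s websecure listener must admit routes from other namespaces. A hedged sketch of what that listener might look like — the gateway class, certificate ref, and `from: All` policy are assumptions about your environment:

```yaml
# gateway-api namespace — the listener the HTTPRoute's parentRef targets.
# Without a permissive allowedRoutes policy, the route never attaches.
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: default-gateway
  namespace: gateway-api
spec:
  gatewayClassName: traefik          # assumption — use your controller's class
  listeners:
    - name: websecure
      protocol: HTTPS
      port: 443
      tls:
        certificateRefs:
          - name: wildcard-cert      # assumption — your TLS certificate Secret
      allowedRoutes:
        namespaces:
          from: All                  # or Selector, to scope to labeled namespaces
```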
Celery Worker Deployment
The Celery worker runs the same image as the web pod but with a different entrypoint. It handles all background work: M3U refreshes, EPG parsing, stream health checks. Same anti-affinity rule — it gets its own node separate from the web pod, PostgreSQL, and Redis:
- name: Deploy Dispatcharr Celery Worker
kubernetes.core.k8s:
state: present
definition:
apiVersion: apps/v1
kind: Deployment
metadata:
name: dispatcharr-celery
namespace: "{{ dispatcharr_namespace }}"
labels:
app: dispatcharr-celery
app.kubernetes.io/part-of: dispatcharr
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: dispatcharr-celery
template:
metadata:
labels:
app: dispatcharr-celery
app.kubernetes.io/part-of: dispatcharr
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/part-of
operator: In
values: [dispatcharr]
topologyKey: kubernetes.io/hostname
containers:
- name: dispatcharr-celery
image: "{{ dispatcharr_image }}"
command: [celery]
args: [-A, dispatcharr, worker, -l, info]
env:
- name: DISPATCHARR_ENV
value: "{{ dispatcharr_env }}"
- name: DISPATCHARR_LOG_LEVEL
value: "{{ dispatcharr_log_level }}"
- name: DJANGO_SECRET_KEY
valueFrom:
secretKeyRef:
name: dispatcharr-django-secret
key: DJANGO_SECRET_KEY
- name: DJANGO_SETTINGS_MODULE
value: "{{ dispatcharr_django_settings_module }}"
- name: PYTHONUNBUFFERED
value: "{{ dispatcharr_pythonunbuffered }}"
- name: POSTGRES_HOST
value: "{{ dispatcharr_postgres_host }}"
- name: POSTGRES_PORT
value: "{{ dispatcharr_postgres_port }}"
- name: POSTGRES_DB
value: "{{ dispatcharr_postgres_db }}"
- name: POSTGRES_USER
value: "{{ dispatcharr_postgres_user }}"
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: dispatcharr-db-secret
key: POSTGRES_PASSWORD
- name: REDIS_HOST
value: "{{ dispatcharr_redis_host }}"
- name: REDIS_PORT
value: "{{ dispatcharr_redis_port }}"
- name: REDIS_DB
value: "{{ dispatcharr_redis_db }}"
- name: TZ
value: "{{ dispatcharr_tz }}"
resources:
requests:
cpu: "{{ dispatcharr_celery_cpu_request }}"
memory: "{{ dispatcharr_celery_memory_request }}"
limits:
cpu: "{{ dispatcharr_celery_cpu_limit }}"
memory: "{{ dispatcharr_celery_memory_limit }}"
Wait for Rollout
The role waits for PostgreSQL (up to 5 minutes) and then the web deployment (up to 10 minutes) before reporting success. ignore_errors: true means a timeout won’t fail the whole play — the debug task at the end tells you where things landed:
- name: Wait for PostgreSQL to become available
kubernetes.core.k8s_info:
api_version: apps/v1
kind: StatefulSet
name: dispatcharr-db
namespace: "{{ dispatcharr_namespace }}"
register: _dispatcharr_db
until: >
_dispatcharr_db.resources | length > 0 and
(_dispatcharr_db.resources[0].status.availableReplicas | default(0)) >= 1
retries: 30
delay: 10
ignore_errors: true
- name: Wait for Dispatcharr Web to become available
kubernetes.core.k8s_info:
api_version: apps/v1
kind: Deployment
name: dispatcharr-web
namespace: "{{ dispatcharr_namespace }}"
register: _dispatcharr_web
until: >
_dispatcharr_web.resources | length > 0 and
(_dispatcharr_web.resources[0].status.availableReplicas | default(0)) >= 1
retries: 60
delay: 10
ignore_errors: true
- name: Report Dispatcharr deployment status
ansible.builtin.debug:
msg:
- "======================================================"
- " Dispatcharr Deployed!"
- "======================================================"
- " UI: https://{{ dispatcharr_ingress_host }}"
- " Namespace: {{ dispatcharr_namespace }}"
- "======================================================"
Keeping Every Pod on Its Own Node
Dispatcharr does real, sustained work. The web pod proxies live video streams. Celery handles background tasks: M3U refreshes, EPG parsing, stream health checks. PostgreSQL processes incoming channel and guide data. When any of these spike in CPU — and they do — I don’t want it affecting workloads on the same physical node.
Every pod in the stack carries the label app.kubernetes.io/part-of: dispatcharr, and every pod has this anti-affinity rule in its spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app.kubernetes.io/part-of
operator: In
values: [dispatcharr]
topologyKey: kubernetes.io/hostname
The requiredDuringSchedulingIgnoredDuringExecution constraint is a hard rule — the Kubernetes scheduler will refuse to place any two Dispatcharr pods on the same node. Web app, Celery worker, PostgreSQL, Redis: each on its own node, guaranteed.
This is different from the softer preferredDuringSchedulingIgnoredDuringExecution variant, which is just a hint the scheduler can ignore under pressure. required means it. If there aren’t enough nodes, the pod stays pending — I’d rather see a scheduling failure than silently end up with noisy neighbors starving each other for CPU.
Pair that with explicit resource limits on every container (defined in defaults/main.yml above), and when Celery goes wide open during an M3U refresh of tens of thousands of streams, it cannot starve the other workloads on that node — and it can’t even share a node with the web pod or database. Pod anti-affinity plus resource limits: the noisy neighbor problem is solved at the scheduling layer, not patched over after the fact.
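If you run a smaller cluster and would rather degrade gracefully than leave pods pending, the soft variant uses the same selector expressed as a weighted preference the scheduler may break under pressure:

```yaml
# Soft alternative: prefer separate nodes, but still schedule when nodes run out.
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100                  # highest preference, still only a hint
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/part-of
                operator: In
                values: [dispatcharr]
          topologyKey: kubernetes.io/hostname
```

Note the extra nesting: the preferred form wraps the selector in a podAffinityTerm and adds a weight, which is an easy place to make a YAML mistake.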
Bonus: I Built a dispatcharr-mcp Server
Once Dispatcharr was running, I wanted to manage it through AI — check what’s streaming, force a failover, manage channels — all from a conversation. So I built dispatcharr-mcp (v0.1.1): a Model Context Protocol server that exposes Dispatcharr’s full REST API as tools an AI agent can call.
It’s a Python package that supports two authentication modes:
- API Key (recommended) — set DISPATCHARR_API_KEY and you’re done. Stateless, no token expiry, no re-login overhead. Generate one in Dispatcharr UI → System → Users.
- JWT username/password (fallback) — set DISPATCHARR_USERNAME and DISPATCHARR_PASSWORD. Tokens are fetched lazily and refreshed automatically.
Tools are organized by domain:
- Channels — list, search, create, update, delete channels and groups
- Streams — browse raw M3U streams from your providers
- Proxy — live stream control: change stream, stop stream, force failover
- EPG — manage guide data sources and programme schedules
- M3U Accounts — add providers, trigger refreshes
- VOD — browse movies, series, and episodes
- System — stream profiles, settings, system events
Configure it with your Dispatcharr URL and API key (or username/password), connect it to Claude Desktop, VS Code, Cline, or any MCP-compatible client, and you can do things like:
“What channels are actively streaming right now and how many clients are on each?”
“ESPN HD is buffering — switch it to the next available stream.”
“Add the iptv-org free playlist as a new M3U account and refresh it.”
“Show me all the action movies in my VOD library.”
“Find all channels with no EPG assigned.”
Your AI assistant becomes a full control plane for your IPTV setup.
Dispatcharr is on GitHub, with full docs at dispatcharr.github.io/Dispatcharr-Docs and an active Discord community. If you’re tired of wrestling with static M3U files and want actual reliability from your IPTV setup, it’s worth an afternoon.