A TikTok by @github.awesome — researched and verified by Depth
xyOps is a complete, self-hosted replacement for crontab that unifies job scheduling, real-time server monitoring, and built-in ticketing into one platform — automatically capturing a full server snapshot and opening a ticket with logs and metrics whenever a job crashes.
xyOps is an open-source, self-hosted workflow automation and server monitoring platform built by Joseph Huckaby (PixlCore LLC). It is the direct spiritual successor to Cronicle, the same author's earlier cron-replacement project.
xyOps is described as "a next-generation system for job scheduling, workflow automation, server monitoring, alerting, and incident response — all combined into a single, cohesive platform." It does not hide features behind paywalls or push telemetry to any third party.
xyOps runs as a conductor (the primary scheduler/UI server) and one or more xySat satellites (lightweight agents installed on worker servers). The conductor is distributed as a Docker container; satellites can be installed on Linux, macOS, and Windows. The system is built on Node.js LTS and uses a JSON-over-STDIO Plugin API, meaning plugins can be written in any language without an SDK.
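Because the Plugin API is plain JSON over STDIO, a plugin can be an ordinary script in any language. Here is a minimal sketch; the completion envelope fields (`complete`, `code`) are an assumption borrowed from Cronicle's plugin convention by the same author, so check the xyOps Plugin API docs for the exact field names:

```shell
# Hypothetical xyOps plugin: read the job JSON the conductor sends on STDIN,
# do the work, then report completion as a JSON line on STDOUT.
run_plugin() {
    job_json="$(cat)"               # job parameters arrive as JSON on STDIN
    echo "starting work..." >&2     # plain lines become job log output
    # ... real work would happen here ...
    printf '{ "complete": 1, "code": 0 }\n'   # assumed completion envelope
}

# Simulate the conductor handing the plugin an empty parameter set:
result="$(printf '{ "params": {} }' | run_plugin)"
echo "$result"
```

The same pattern works from Python, Go, or any runtime that can read STDIN and write STDOUT, which is what makes the no-SDK claim plausible.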
Job Scheduling:
- Full crontab import support, one-time jobs, interval-based triggers, blackout windows, and precision scheduling options.
- Parallel job execution with configurable max-parallel limits and queuing.
- Self-imposed runtime constraints: CPU limits, memory limits, max output size, and retry/queue controls.
- A graphical workflow editor lets you connect events, triggers, actions, and monitors into visual pipelines.
Server Monitoring:
- Minute-level time-series metrics (CPU, memory, network, disk, and log tracking) per job and per server.
- Historical performance graphs from hourly to yearly.
- Custom monitor expressions using JEXL-based syntax.
- Server and group-level dashboards.
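To illustrate the kind of condition a custom monitor might express, here is a hypothetical JEXL-style expression; the metric names (`cpu.avg`, `mem.percent`) are placeholders, not taken from the xyOps docs, so consult the monitoring reference for the real expression context:

```
cpu.avg > 80 && mem.percent > 90
```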
Alerting:
- All alerts include a snapshot of the server state at the moment of firing.
- Alert emails include the list of running jobs on that server at the time.
- One click from an alert opens a full snapshot showing every process, CPU load, and network connection.
- Supports email, webhook, and custom notification channels. Roadmap includes one-click templates for Slack, Discord, Pushover, and ntfy.
Ticketing (Incident Response):
- Built-in lightweight ticketing system integrated directly with jobs, alerts, files, and automation.
- When a job fails, xyOps can automatically open a ticket with full context: logs, history, and linked metrics.
- Tickets can attach files and trigger jobs (useful for CI/CD remediation pipelines).
- Fully scriptable via REST API.
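Because tickets are scriptable over REST, automation outside xyOps can open them too. The sketch below is hypothetical: the route, `api_key` field, and payload shape are placeholders modeled loosely on the satellite-install URL style, not the documented API, so verify against the xyOps API reference before use:

```shell
# Hypothetical sketch only: /api/app/create_ticket, the api_key field, and the
# payload shape are placeholders, NOT the documented xyOps API.
response="$(curl -s -X POST "http://YOUR_XYOPS_SERVER:5522/api/app/create_ticket" \
    -H "Content-Type: application/json" \
    -d '{ "api_key": "API_KEY", "title": "Nightly backup failed" }' \
    || echo "request failed")"
echo "$response"
```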
Snapshots:
- Point-in-time captures of server or group state for forensics and comparisons.
- Snapshots are linked to alerts and tickets, creating a traceable incident timeline.
xyOps is explicitly the "spiritual successor to Cronicle" (confirmed in the Cronicle README by the same author). Cronicle will continue to receive bug fixes and security patches, but new feature development is happening in xyOps. A migration path from Cronicle to xyOps is documented in the xyOps docs under the "Cronicle" section.
xyOps includes a formal LONGEVITY.md committing that the project will always remain under an OSI-approved open-source license, will never be superseded by a proprietary fork, and will be submitted to independent archival services. The author explicitly states: "No rug pulls."
# One-liner to spin up xyOps locally
docker run --detach --init --restart unless-stopped \
-v xy-data:/opt/xyops/data \
-v /var/run/docker.sock:/var/run/docker.sock \
-e TZ="America/Los_Angeles" \
-e XYOPS_xysat_local="true" \
-p 5522:5522 \
-p 5523:5523 \
--name "xyops01" \
--hostname "xyops01" \
ghcr.io/pixlcore/xyops:latest
Then open http://localhost:5522 in your browser.
- Username: admin
- Password: admin
The XYOPS_xysat_local="true" flag tells xyOps to run a local satellite agent inside the same container, so you can immediately schedule and monitor jobs on the host machine without installing a separate agent.
# From the xyOps UI: click "Add Server" → copy the one-liner installer
# It looks like this (replace YOUR_XYOPS_SERVER and AUTH_TOKEN):
curl -s "http://YOUR_XYOPS_SERVER:5522/api/app/satellite/install?t=AUTH_TOKEN&os=linux" | sudo sh
# Build and run xyOps from source (requires Node.js LTS)
git clone https://github.com/pixlcore/xyops.git
cd xyops
npm install
node bin/build.js dev
bin/debug.sh
xyOps has native crontab import support. In the UI, go to Events → Import Crontab and paste your existing crontab entries. xyOps will convert them to managed events with full monitoring.
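For example, entries like these from `crontab -l` (illustrative lines, not from the xyOps docs) can be pasted straight into the import dialog:

```
# m   h  dom mon dow  command
0     2  *   *   *    /usr/local/bin/nightly-backup.sh
*/5   *  *   *   *    /opt/scripts/healthcheck.sh
30    6  *   *   1    /opt/scripts/weekly-report.sh
```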
xyOps is not a random new tool — it is Cronicle v2, built by the same author (Joseph Huckaby). Cronicle has been a well-known cron replacement in the self-hosting community for years. If you're already a Cronicle user, there is a documented migration path. If you're evaluating xyOps, knowing its lineage explains why it's already polished at v1.0.
The project explicitly states it does not accept feature pull requests. You can contribute bug reports, documentation, and plugins, but the core feature roadmap is controlled by PixlCore LLC. This is a deliberate governance choice, not an oversight.
The managed xyOps Cloud service and the Enterprise Plan (for on-prem air-gapped installs) are both listed as "coming soon" as of March 2026. For now, self-hosting via Docker is the only option. Production deployments require you to manage TLS, storage backends (local disk, MinIO/S3-compatible), and HA yourself.
For production, xyOps recommends MinIO as an external storage backend. However, as of February 2026, MinIO's open-source repository has been archived, and MinIO stopped publishing community Docker containers in October 2025. The xyOps docs acknowledge this and are evaluating RustFS as a replacement. This is a real operational risk for new production deployments.
xyOps monitors CPU, memory, network, and disk at the job and server level, but it is not a replacement for Prometheus+Grafana, Datadog, or ELK for deep application-level observability. Its monitoring is ops-level ("is this server healthy?"), not application-level ("what is my p99 latency for this endpoint?").
Alternatives:
- jhuckaby/Cronicle: The predecessor, MIT licensed, still maintained for bug fixes. Simpler, no ticketing or snapshots.
- healthchecks/healthchecks: SaaS available. Monitors whether jobs ran, not what they consumed. Free tier: 20 monitors.
- aptible/supercronic: Container-native cron replacement (Go binary), logs to stdout/stderr, graceful SIGTERM handling. No UI, no monitoring, no ticketing. Pure scheduler for containers.
xyOps supports crontab import and cron-expression scheduling, but it is a full platform (conductor + satellite agents + web UI) rather than a drop-in binary replacement; it requires Docker or Node.js to run, not just editing /etc/crontab.
Source: The official site explicitly lists CPU, memory, network, disk, and log tracking per job, plus built-in ticketing for incident response, all in a single platform.
Source: The official feature list confirms "CPU, memory, network, disk, and log tracking per job" with the ability to enforce limits such as CPU and memory usage per job.
Source: Confirmed that alerts include a server snapshot, and that xyOps can automatically open a ticket with full context (logs, history, linked metrics) when a job fails; snapshots are described as "point-in-time captures of server or group state for forensics."
Run the Docker quickstart locally (docker run ... ghcr.io/pixlcore/xyops:latest) and import an existing crontab via Events → Import Crontab to validate the migration path firsthand
The 60-second Docker setup lets you immediately assess UI quality, crontab conversion fidelity, and the snapshot/ticket workflow before committing any production resources
Deliberately trigger a job failure in the test environment and document the full automatic ticket creation flow — what fields are populated, what logs are captured, and how long the snapshot takes to generate
The crash-snapshot-to-ticket pipeline is the core differentiating claim; validating it empirically confirms whether the mean-time-to-diagnosis benefit is real and how complete the captured context actually is
Audit your current production crontab entries and categorize them by criticality, then map each to xyOps event types (one-time, interval, cron expression) to produce a concrete migration inventory before touching production
A pre-migration inventory surfaces edge cases (blackout windows, chained jobs, environment variable dependencies) that could break silently during import and need manual remediation
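A starting point for that inventory on a single host, using standard POSIX tools (note: `crontab -l -u` requires root, and system cron locations such as /etc/cron.d are not covered by this sketch):

```shell
# Dump every user's crontab into one file for categorization.
for u in $(cut -d: -f1 /etc/passwd); do
    crontab -l -u "$u" 2>/dev/null | sed "s|^|$u: |"
done > crontab-inventory.txt

# Count the lines collected (comments included).
wc -l < crontab-inventory.txt
```

Running this across the fleet and diffing the results against the xyOps event list after import is a cheap way to catch entries that were silently dropped or mangled.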
Evaluate RustFS as a MinIO replacement by standing up a test instance and confirming xyOps storage backend compatibility before designing any production architecture
MinIO's community Docker containers were discontinued in October 2025 and its OSS repo is archived — building a production deployment on MinIO today creates a real operational dead-end; validating RustFS now avoids a forced migration later
Install an xySat satellite on one non-critical remote Linux server and verify the one-liner installer, agent connectivity, and per-server dashboard metrics populate correctly
Multi-server monitoring is central to the platform's value proposition; testing satellite deployment on a real remote host exposes network, firewall, and auth issues before rolling out to the full fleet
Define the boundary between xyOps monitoring and any existing Prometheus/Grafana or APM stack in a written decision record — explicitly documenting which signals each tool owns to prevent alert duplication and coverage gaps
xyOps is ops-level (server health, job state) not application-level (p99 latency, error rates); without a clear boundary, teams either duplicate instrumentation or develop false confidence that xyOps replaces deeper observability
Write a JEXL-based custom monitor expression for your highest-criticality job (e.g., alert if CPU stays above 80% for 3 consecutive minutes during a job run) and test the full alert → snapshot → ticket chain end-to-end
Custom monitor expressions are the mechanism for production-grade alerting; exercising this path validates that the JEXL syntax is expressive enough for your actual thresholds and that the notification channels (email/webhook) deliver reliably
Design and document a production HA architecture for xyOps — covering TLS termination, external storage backend (RustFS/S3-compatible), backup strategy for the conductor, and satellite reconnection behavior during conductor downtime
xyOps Cloud and Enterprise managed options are not yet available; production self-hosting requires you to solve HA, TLS, and data durability yourself, and doing this design work before deployment prevents costly architectural rework
Build a proof-of-concept plugin in your team's primary scripting language (Python, Bash, or Go) using the JSON-over-STDIO Plugin API to confirm the language-agnostic integration claim and establish an internal plugin development pattern
The no-SDK plugin model is a key adoption enabler; validating it with a real internal use case (e.g., a database backup job with structured output) confirms the integration story and produces a reusable template for future plugins
Subscribe to the xyOps GitHub releases feed and set a calendar reminder to re-evaluate the MinIO/RustFS storage situation and the Cloud/Enterprise plan availability in Q3 2026
Two significant unknowns — production storage backend stability and managed hosting options — are expected to resolve within 2026; tracking them prevents being caught off-guard by breaking changes or missing a simpler deployment path