🥳 🔠 We are releasing a major update to our telemetry tracking - B.Y.O.M.ID - Bring Your Own Monitor IDs!
The foundation of integrating with Cronitor is using a monitor's globally unique ID to send telemetry data about a job's execution via the Ping API. While conceptually simple, this can make integrating with Cronitor cumbersome because it requires first using the dashboard or API to create a monitor in order to generate a new monitor ID.
Using the new Universal Ping URL you can assign your own unique (to your account) identifier for a monitor. When a new ID is detected, a new monitor is created automatically.
The URL contains your Ping API key, and calling it with a new identifier such as "A Unique Monitor" would create a monitor with that name. Rules and alert preferences can then be attached from the dashboard.
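As a rough sketch (the URL shape, key, and identifier below are placeholders, not values from this post; check the Cronitor docs for the exact Universal Ping URL format), pinging a Universal Ping URL from Python might look like:

```python
import urllib.parse
import urllib.request

PING_API_KEY = "your-ping-api-key"   # placeholder, not a real key
MONITOR_KEY = "a-unique-monitor"     # any identifier unique to your account

# Hypothetical URL shape for illustration only.
url = f"https://cronitor.link/p/{PING_API_KEY}/{MONITOR_KEY}"

def ping(state=None):
    """Send a telemetry ping; an unseen monitor key creates a new monitor."""
    query = f"?state={urllib.parse.quote(state)}" if state else ""
    with urllib.request.urlopen(url + query, timeout=10) as resp:
        return resp.status
```

The first ping with a previously unseen key is what triggers the automatic monitor creation described above.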
🐍🐍 Cronitor is written (mostly) in Python, so we were excited to recently move the most popular Python client (and the cronitor package on PyPI) into our GitHub organization and begin contributing to its development.
Today we're launching version 2.0 of this client. The API has changed to make it easier to work with provisioning/deployment pipelines, and it will serve as a model for updates to our other clients. It also comes with something we've wanted for ourselves: a dead simple way to automagically monitor any Python function.
from cronitor import ping

@ping("An Important Function")
def main():
    ...
⏸ ▶️ We have released an update to our core alert delivery logic. Previously, once a monitor began failing, and for as long as it remained in a failing state, a follow-up alert would be delivered on a pre-defined interval (8 hours by default). This could (and did) result in alerts being sent for years on dead monitors.
With this release, monitors will automatically pause alerting after 10 consecutive alerts. If/when the monitor begins receiving the expected telemetry pings again, the expected recovery notification will be delivered, and the monitor will resume alerting per its rules.
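Roughly, the new pausing behavior works like this (a simplified model for illustration, not Cronitor's actual implementation):

```python
MAX_CONSECUTIVE_ALERTS = 10  # threshold described in this release

class Monitor:
    def __init__(self):
        self.failing = False
        self.alerts_sent = 0

    def on_failure(self):
        """Called each time the follow-up alert interval elapses."""
        self.failing = True
        # Stop alerting after 10 consecutive alerts instead of
        # re-alerting every 8 hours forever.
        if self.alerts_sent < MAX_CONSECUTIVE_ALERTS:
            self.alerts_sent += 1
            return "alert"
        return "paused"

    def on_recovery_ping(self):
        """Expected telemetry resumes: notify recovery and re-arm alerting."""
        if self.failing:
            self.failing = False
            self.alerts_sent = 0
            return "recovered"
        return "healthy"
```

The key point is that a recovery ping resets the counter, so a monitor that comes back to life alerts normally on its next failure.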
✉️ 📊 Rollup reporting has been one of the most-requested features this year, and after three users requested it in a two-week period we decided to bump the priority and ship it quickly.
Today we're launching summary reports. These reports will...ahem...summarize...your monitor activity (failures, new monitors, etc.) for a given time period. If you're interested in receiving these reports via Slack/Teams instead of email, please reach out.
📅💡 CloudWatch Events are a great way to bring cron-based scheduling to event-driven systems built on top of AWS's SNS/SQS/Lambda infrastructure.
CloudWatch Events present an interesting monitoring challenge, though: they use a 6-field cron expression (* * * * * *) instead of the standard 5 fields. This is similar to Java Cron (Quartz, Spring, etc.), but unlike Java Cron, where the 6th field is added to the left side of the expression and means seconds, with CloudWatch Events the 6th field is added to the right side of the expression and signifies years.

In order to correctly monitor CloudWatch Events we've added a special monitor type that parses the 6th field as years instead of seconds. You can select it from the dashboard or use "monitor_type": "cloudwatch" when creating a monitor via the API.
🔥📉 When tuning load times there's nothing more satisfying than a chart that's down and to the right!
Refactoring our approach to calculating monitor status in bulk (e.g. opening your Cronitor dashboard) has improved load times by a factor of 3-5x for accounts with 50+ monitors.
❤️📄 We're grateful to all the teams out there that take the time to write great product documentation for us, and we're committed to doing the same for our users.
Our new docs landing page has been reorganized to help you quickly find the information you need.
🔍📟 We've made it easy to add an alerting system to your APIs! By creating assertions on the key/value pairs in the response body itself, you can quickly add monitoring to any API.
We've been using it to monitor queue lengths, alert counts, and a number of other key metrics that we've written internal APIs for.
🌐 Select regions for your Website, API and Server monitoring! This feature makes it easier to prevent false alarms by deploying monitoring close to your actual users.
Website and API monitors can also now use a Does Not Contain assertion.
🔒 Monitor your private S3 buckets to ensure they stay private. Cronitor will alert you instantly if they become publicly accessible.
🔓 You can also monitor the availability and health of your public S3 buckets with response assertions.
An update to cronitor discover: you can now easily skip a job import with cronitor exec that could cause output failures and, in certain cases, even a process deadlock.
Improved navigation and design of the account settings page! In fact, this is the first change to the design of the account settings page since Cronitor launched in 2014.
Over the years we managed to cram a lot of features into that page, and as a result it became harder and harder to use. All settings are now grouped under logical sections (Team, Billing, Integrations, etc), and each section is linkable via the URL.
Monitors can now ping using email! Just send an email to
Email pings are a great way to validate that your emails are being delivered: create a monitor to alert you if it doesn't receive an email on a given schedule or within a certain time interval. They can also be a handy way to send telemetry (ping) data from servers that can't make outbound HTTP requests but are configured to allow SMTP.
To ping different monitor events, append the event name with a +.
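For illustration (the address and domain below are placeholders, and run/complete/fail are shown only as example event names), building event-specific addresses might look like:

```python
BASE_ADDRESS = "your-monitor-address"   # placeholder local part
DOMAIN = "example.com"                  # placeholder domain

def event_address(event=None):
    """Build a ping address, appending the event name with a +."""
    local = f"{BASE_ADDRESS}+{event}" if event else BASE_ADDRESS
    return f"{local}@{DOMAIN}"

# e.g. event_address("fail") targets the fail event for this monitor
```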
Java Cron Job, AWS Scheduled Event, and more.
The Ping API has improved ping-matching logic that computes durations even when job instances overlap.
discover was not saving user crontabs on RHEL.
discover will prefill monitor names when you are re-running the command.
/ok endpoint to reset a monitor back to healthy status until the next scheduled run.
The dashboard pause feature will automatically send an /ok ping when your monitor is unpaused.
discover will no longer create monitors for meta cron jobs like
The dashboard will no longer display monitors for meta cron jobs like
discover will not crash when hostnames are longer than 55 characters.
discover now includes the cron job administration tools
discover will no longer create monitors with duplicate names in some edge conditions.
discover will display an interactive prompt even when no
discover now supports the editing of auto-generated names at the interactive prompt.
discover will prompt a user to save an updated crontab if no --save flag is supplied.
discover can now be supplied a path to a single crontab or a directory of crontabs.
discover will automatically find jobs in /etc/cron.d when run as the root user.
discover's interactive shell is cleaner and more descriptive.
exec will no longer strip newline characters from stdout and stderr.
exec will relay signals. Signals will now be ignored by the exec process and sent directly to the subprocess. Note that in Unix-based operating systems like Linux, SIGKILL cannot be caught or relayed.
exec will return stdout and stderr from the subprocess as a combined stdout stream. Before this change the output was buffered until the subprocess exited.
exec will relay the exit code from the subprocess. If your subprocess returns code 127, exec will return 127.
exec will no longer append a newline when writing stdout from the subprocess.
ops.default couldn't be deleted.
discover now uses a longer timeout to accommodate crontab files with hundreds of jobs.
NotRunAt rule type for heartbeat monitors. With this rule type you will be alerted if a ping is not sent by a certain time each day.
discover can accept a
exec will now set the environment variable CRONITOR_EXEC=1 so a process can know if it's being run under exec.
discover interactive mode to easily customize monitor names during crontab import.
discover will automatically find your user crontab if no file is supplied.
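The exec behaviors described above (the CRONITOR_EXEC variable, signal relay, and exit-code relay) can be sketched in a few lines of Python; this is a simplified illustration, not the cronitor CLI's implementation:

```python
import os
import signal
import subprocess
import sys

def run(cmd):
    """Run cmd the way exec is described above: tag the environment,
    relay catchable signals, and relay the exit code."""
    env = dict(os.environ, CRONITOR_EXEC="1")  # lets the child detect exec
    proc = subprocess.Popen(cmd, env=env)

    def relay(signum, frame):
        # Ignore the signal here and forward it to the subprocess.
        # SIGKILL cannot be caught, so it cannot be relayed.
        proc.send_signal(signum)

    for sig in (signal.SIGINT, signal.SIGTERM):
        signal.signal(sig, relay)

    return proc.wait()  # e.g. a subprocess exiting 127 yields 127

if __name__ == "__main__" and len(sys.argv) > 1:
    sys.exit(run(sys.argv[1:]))
```

Relaying rather than handling signals keeps the wrapper transparent: the subprocess decides how to respond to SIGINT/SIGTERM, and the wrapper's own exit status always mirrors the subprocess's.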