Released: Jan 15, 2026
Prepared by: Operations with assistance from Engineering
Effective: Jan 15, 2026
Improved resilience of backend services when dependent components (such as in‑memory data stores) experience connectivity issues, reducing the chance of stuck or unresponsive requests.
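To illustrate the resilience pattern described above, here is a minimal sketch of bounded retries with a fallback path. The function names, retry counts, backoff delays, and the use of `ConnectionError` are illustrative assumptions, not the platform's actual implementation:

```python
import time

def fetch_with_fallback(fetch_from_cache, fetch_from_source, attempts=2, delay=0.05):
    """Try the in-memory store a bounded number of times, then fall back to the
    primary source instead of letting the request hang.

    All names and limits here are hypothetical; they only sketch the pattern
    of bounding work against a flaky dependency.
    """
    for i in range(attempts):
        try:
            return fetch_from_cache()
        except ConnectionError:
            time.sleep(delay * (2 ** i))  # brief exponential backoff before retrying
    # Cache is unreachable: serve from the source of truth rather than failing.
    return fetch_from_source()
```

The key property is that a cache outage degrades to a slower but successful request rather than an unresponsive one.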
Enhanced monitoring and alerting for SSL/TLS certificate renewals so that expiring certificates are proactively detected and surfaced to engineering before they can impact availability.
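A proactive certificate check of this kind can be sketched with the Python standard library. The 30‑day threshold and function names are assumptions for illustration; the actual alerting windows and tooling are internal:

```python
import socket
import ssl
from datetime import datetime, timezone

ALERT_THRESHOLD_DAYS = 30  # hypothetical threshold; real alerting windows may differ

def days_until_expiry(not_after: str) -> float:
    """Days remaining given a certificate's notAfter string,
    e.g. 'Jun  1 12:00:00 2030 GMT'."""
    expiry = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
    return (expiry - datetime.now(timezone.utc)).total_seconds() / 86400

def check_certificate(host: str, port: int = 443) -> float:
    """Fetch the peer certificate for a host and return days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_until_expiry(cert["notAfter"])

if __name__ == "__main__":
    remaining = check_certificate("example.com")
    if remaining < ALERT_THRESHOLD_DAYS:
        print(f"ALERT: certificate expires in {remaining:.0f} days")
```

Wiring such a check into a scheduled job surfaces expiring certificates well before they affect availability.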
Added detailed metrics and dashboards for background workflow steps and audio‑processing pipelines, making it easier to track processing times and quickly identify bottlenecks.
Expanded tracing coverage across streaming and batch audio services, WebSocket connections, command‑line tools, and development environments to provide end‑to‑end visibility of requests.
Optimized backend workflows for continuous and ambient audio monitoring so that long‑running sessions are handled more efficiently and with more robust timeout behavior.
Simplified and cleaned up gateway and ingress configurations, consolidating them onto a more modern pattern that relies on automatic certificate management and reduces configuration drift.
Performed regular dependency and platform upgrades (including workflow orchestration and container tooling) to keep the stack current, secure, and aligned with best practices.
Strengthened backup and recovery by extending retention and ensuring backups are stored in durable, persistent storage for easier retrieval when needed.
Improved internal data pipelines and proof‑of‑concept integrations to get assessment data into analytics tools in a more consistent and report‑friendly format, supporting faster debugging and richer reporting.
Introduced model‑invocation tracking to record which models are used for each assessment and by which client and project. This supports more complete usage reporting and transparent analytics.
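A model-invocation record of this kind can be sketched as a small structured event. All field names here are illustrative; the platform's actual tracking schema is internal:

```python
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ModelInvocation:
    # Field names are hypothetical examples of what such a record might carry.
    assessment_id: str
    model_name: str
    model_version: str
    client_id: str
    project_id: str
    invoked_at: str  # ISO-8601 UTC timestamp

def record_invocation(assessment_id, model_name, model_version, client_id, project_id):
    """Build a model-invocation event ready to ship to an analytics pipeline."""
    event = ModelInvocation(
        assessment_id=assessment_id,
        model_name=model_name,
        model_version=model_version,
        client_id=client_id,
        project_id=project_id,
        invoked_at=datetime.now(timezone.utc).isoformat(),
    )
    return asdict(event)
```

Recording one such event per assessment is what makes per-client and per-project usage reporting possible downstream.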
Refined documentation and examples for bulk data transfer and ingestion workflows so that partners can more easily move large volumes of audio and assessment data into the platform.
Conducted initial performance and load‑testing work for the v3 APIs to better understand system behavior under higher traffic and prepare for increased usage.
Extended common data‑query capabilities to support more complex filtering over time ranges and assessment identifiers, especially for continuous and ambient monitoring scenarios.
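The kind of combined time-range and identifier filtering described above can be sketched as follows. The record shape and function name are assumptions for illustration, not the platform's query API:

```python
def filter_assessments(records, start, end, assessment_ids=None):
    """Return records whose timestamp falls in [start, end) and whose
    assessment_id is in assessment_ids (when a list is given).

    Timestamps may be epoch seconds or datetimes, as long as they compare
    consistently with start and end. The field names are hypothetical.
    """
    ids = set(assessment_ids) if assessment_ids else None
    return [
        r for r in records
        if start <= r["timestamp"] < end
        and (ids is None or r["assessment_id"] in ids)
    ]
```

A half-open interval like this is convenient for continuous-monitoring queries because adjacent windows never double-count a record.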
Enhanced documentation for continuous monitoring, including clearer guidance on session behavior and timeouts, to help integrators design robust long‑running monitoring flows.
Produced a comprehensive implementation guide for continuous monitoring that covers best practices, timing considerations, and client‑side voice activity detection (VAD), with example approaches in common programming languages.
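As one example approach to client-side VAD, a minimal energy-based detector with a hangover period can be sketched as below. The threshold, hangover length, and frame representation are illustrative assumptions; production clients typically use more robust detectors:

```python
def frame_energy(samples):
    """Mean squared amplitude of one audio frame (a list of float samples)."""
    return sum(s * s for s in samples) / max(len(samples), 1)

def detect_voice(frames, threshold=0.01, hangover=3):
    """Label each frame True (speech) or False (silence), holding the speech
    label for `hangover` frames after energy drops below the threshold so
    brief pauses do not split an utterance. Parameters are hypothetical."""
    labels, hold = [], 0
    for frame in frames:
        if frame_energy(frame) >= threshold:
            hold = hangover
            labels.append(True)
        elif hold > 0:
            hold -= 1
            labels.append(True)
        else:
            labels.append(False)
    return labels
```

The hangover is the timing consideration the guide emphasizes: without it, natural pauses between words prematurely end a recording.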
Updated API documentation to correctly describe response payloads and field names, ensuring that integrators see the exact structures returned by the service.
Clarified implementation guidance around survey and workflow interactions, workflow testing, and configuration expectations so that changes can be validated earlier in the development lifecycle.
Created or refined internal architecture diagrams and design documents for key subsystems (such as streaming integrations and ambient monitoring), providing clearer references for future enhancements.
Addressed high‑priority infrastructure and container vulnerabilities identified by security tooling, reducing the overall attack surface.
Strengthened monitoring of load balancers and application gateways with standardized metrics and alerts for error responses, providing earlier visibility into potential issues.
Improved logging and observability for cloud‑native and serverless components, making it easier to detect anomalies and confirm that logs are correctly flowing into centralized monitoring tools.
Tightened controls around third‑party integrations and collaboration tools by reviewing and removing unneeded access, and blocking unapproved integrations where appropriate.
Deployed an updated wellness model and aligned internal score distributions so that wellness scores are more consistent across languages and regions.
Introduced a new cognitive model for additional language support and extended scoring logic so that cognitive screening results can be surfaced more broadly where appropriate.
Advanced the Canary AI Wellness watch and phone experience, including:
Implementing end‑to‑end flows for login, recording, permissions, and results across watch and phone.
Integrating health data such as sleep and steps to provide additional context around vocal wellness metrics.
Refining wording and presentation in the app to use clearer labels (for example, “Canary Wellness,” “Canary Stress,” “Canary Mood,” and “Canary Vocal Energy”) and more intuitive phrasing like “Average Steps per Day.”
Aligning timers and visual progress indicators on the watch so that countdowns and progress rings stay in sync.
Adding unobtrusive diagnostic information (such as internal session identifiers) that aids support and debugging without affecting the user experience.
Documented the setup and configuration steps required to get the watch and phone apps working together, making it easier for teams to enable new users.
Added and refined the website support chat widget, which provides real‑time assistance directly from the public website once activated by the support team.
Improved internal documentation and tooling for managed device setup (such as provisioned tablets and managed app configurations), simplifying deployment into clinical or research environments.
Deployment and Access:
Keep your application up to date by enabling automatic updates; we will take care of the rest.
For support or additional details, please contact:
Email: support@canaryspeech.com