Cloudinary
All Systems Operational

About This Site

Cloudinary is an end-to-end cloud-based image and video management solution for your Web and mobile applications.
Cloudinary's service status:

US Datacenter Operational
Media Transformation API - US Operational
Upload API - US Operational
Admin API - US Operational
EU Datacenter Operational
Media Transformation API - EU Operational
Upload API - EU Operational
Admin API - EU Operational
AP Datacenter Operational
Media Transformation API - AP Operational
Upload API - AP Operational
Admin API - AP Operational
Media Delivery Operational
Digital Asset Management (DAM) Operational
Management Console Operational
Website & Documentation Operational
Upload Widget Operational
Product Gallery Widget Operational
Media Library Widget Operational
Video Player Operational
Media Editing Widget Operational
Third-Party - Add-ons Operational
Support Site Operational
Third-Party - Amazon Web Services Operational
AWS ec2-us-east-1 Operational
AWS s3-us-standard Operational
AWS sqs-us-east-1 Operational
AWS ec2-eu-west-1 Operational
AWS ec2-ap-southeast-1 Operational
Integrations Operational
Adobe Creative Cloud Connector Operational
WordPress Plugin Operational
Magento Extension Operational
SalesForce CC Page Designer Cartridge Operational
SalesForce CC Site Cartridge Operational
Billing Operational
Status legend: Operational / Degraded Performance / Partial Outage / Major Outage / Maintenance
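
For applications that want to react to these component statuses automatically, the page can also be read programmatically. Below is a minimal sketch, assuming status.cloudinary.com is hosted on Atlassian Statuspage and exposes the standard public /api/v2/summary.json endpoint; the URL and the JSON field names used here follow that convention and are assumptions not confirmed by this page itself.

# Minimal sketch: poll the assumed Statuspage summary endpoint and print
# the overall indicator plus per-component statuses (matching the list above).
import json
import urllib.request

STATUS_URL = "https://status.cloudinary.com/api/v2/summary.json"  # assumed endpoint

def fetch_status(url: str = STATUS_URL) -> dict:
    """Fetch the status summary JSON and return it as a dict."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def main() -> None:
    summary = fetch_status()
    # Overall status description, e.g. "All Systems Operational".
    print("Overall:", summary["status"]["description"])
    # Per-component status, e.g. "Upload API - US: operational".
    for component in summary.get("components", []):
        print(f'{component["name"]}: {component["status"]}')

if __name__ == "__main__":
    main()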
Past Incidents
Sep 23, 2021

No incidents reported today.

Sep 22, 2021

No incidents reported.

Sep 21, 2021

No incidents reported.

Sep 20, 2021

No incidents reported.

Sep 19, 2021

No incidents reported.

Sep 18, 2021

No incidents reported.

Sep 17, 2021

No incidents reported.

Sep 16, 2021

No incidents reported.

Sep 15, 2021

No incidents reported.

Sep 14, 2021

No incidents reported.

Sep 13, 2021

No incidents reported.

Sep 12, 2021

No incidents reported.

Sep 11, 2021

No incidents reported.

Sep 10, 2021

No incidents reported.

Sep 9, 2021
Resolved - The issue was caused by an erroneous configuration update in our CDN layer. This caused a massive increase in requests to our processing cluster, which failed to scale fast enough due to unusual circumstances. After the issue was discovered, we initiated a manual scale-up as well as a rollback of the CDN configuration update. The incident started at 14:42 UTC and until 15:00 UTC caused increased request latency and a significant number of errors. By 15:00 UTC the errors were contained, and by 15:14 UTC system latency had returned to normal.
Sep 9, 13:07 EDT
Monitoring - The issues have been corrected and we are monitoring the results. More details to follow.
Sep 9, 11:23 EDT
Investigating - We are currently investigating this issue; more details to follow.
Sep 9, 11:11 EDT