
Scott Reed
Nov 28, 2018

Scheduled jobs in the DXC & The Autoheal Policy & Architecture

Scheduled jobs are a great way to write code that runs periodically and performs actions on your CMS or Commerce system through the Episerver API framework: https://world.episerver.com/documentation/developer-guides/CMS/scheduled-jobs/. I have often used these with much success, but in our latest project we needed to do some heavy CRUD syncing of data from two of our client's large systems into the Episerver Commerce catalog structure.
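For context, here is a minimal sketch of what such a job looks like. The [ScheduledPlugIn] attribute and ScheduledJobBase base class are the standard Episerver scheduled job API; the job name, GUID and body are illustrative only:

```csharp
using EPiServer.PlugIn;
using EPiServer.Scheduler;

// Hypothetical example job; the display name and GUID are illustrative.
[ScheduledPlugIn(
    DisplayName = "Catalog sync job",
    GUID = "a0e2b0d4-1111-2222-3333-444455556666")]
public class CatalogSyncJob : ScheduledJobBase
{
    public override string Execute()
    {
        // Pull data from the external systems and write it into the
        // Commerce catalog here (our real job did heavy CRUD syncing).
        return "Sync completed.";
    }
}
```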

We chose scheduled jobs because we could use the API, and we needed to do a few other things as well. We knew the jobs would be intensive, but when we came to test them on the DXC we kept hitting an issue with our largest fresh import of data.

The jobs were getting aborted, so we contacted Episerver support and were informed that Azure Auto Heal is turned on for the environments: https://blogs.msdn.microsoft.com/appserviceteam/2017/08/17/proactive-auto-heal/. Auto Heal works out whether instances have an issue and restarts them; however, one of the thresholds it checks is memory usage, and for us the large fresh import was hitting that threshold because of the massive data set we were working with.

There are two options for this, both around how you architect jobs that do heavy processing: split the work into smaller batches so that memory usage stays under the threshold, or move the jobs onto a separate instance so that a restart does not take your public site with it. A sketch of the first option follows.
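As a rough illustration of the batching option, here is a job that processes its source data in fixed-size batches and honours the stop signal. IsStoppable, Stop() and OnStatusChanged() are part of ScheduledJobBase; the batch size and the FetchBatch/SyncItem helpers are assumptions for the example, not part of the Episerver API:

```csharp
using System.Collections.Generic;
using System.Linq;
using EPiServer.PlugIn;
using EPiServer.Scheduler;

// Illustrative batched import; FetchBatch and SyncItem are hypothetical helpers.
[ScheduledPlugIn(DisplayName = "Batched catalog import")]
public class BatchedCatalogImportJob : ScheduledJobBase
{
    private const int BatchSize = 500; // assumed size; tune against memory headroom
    private bool _stopSignaled;

    public BatchedCatalogImportJob()
    {
        IsStoppable = true; // lets an admin stop the job from admin mode
    }

    public override void Stop()
    {
        _stopSignaled = true;
    }

    public override string Execute()
    {
        var processed = 0;
        List<ExternalItem> batch;

        // Only ever hold one batch in memory at a time, so private bytes
        // stay well below the Auto Heal memory threshold.
        while ((batch = FetchBatch(processed, BatchSize)).Any())
        {
            foreach (var item in batch)
            {
                if (_stopSignaled)
                {
                    return $"Stopped after {processed} items.";
                }

                SyncItem(item); // write into the Commerce catalog
                processed++;
            }

            OnStatusChanged($"Processed {processed} items so far.");
        }

        return $"Imported {processed} items.";
    }

    // Hypothetical stand-ins for the real source system and catalog writes.
    private List<ExternalItem> FetchBatch(int skip, int take) => new List<ExternalItem>();
    private void SyncItem(ExternalItem item) { }

    private class ExternalItem { }
}
```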

As a side note, Episerver support told me that if you have a Commerce site they can create another web app for this at no extra cost.
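If you go that route, the jobs run only on the dedicated web app, and you would disable the scheduler on the public-facing web app. In CMS 11 this can be done in web.config; a sketch, which you should verify against your Episerver version:

```xml
<!-- web.config on the public-facing web app only: turn the scheduler off
     so jobs execute solely on the dedicated jobs web app.
     Verify this element against your CMS version before relying on it. -->
<episerver.framework>
  <scheduler enabled="false" />
</episerver.framework>
```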

So, as a closing thought, whenever you use scheduled jobs, consider how much data you are processing and how intensive the jobs are, and consider separating jobs out as standard on the DXC to help mitigate these issues.
