Mart's Sitecore Art

Sitecore xDB Resharding Tool - Unlock xDB Storage and Performance Limitations by Increasing Collection Shards Without Data Loss


Background

There is currently no way to increase the number of xDB collection database shards for an existing deployment without starting from scratch and losing all your data.

The release of my colleague Vitaly Taleyko's Sitecore xDB Resharding Tool solves this problem for all of us, as it provides a migration path from an old set of collection shards to a new one.

https://github.com/pblrok/Sitecore.XDB.ReshardingTool


The inability to increase shards after deployment is a major problem for enterprise customers using the platform, who may not be aware of how quickly the collection databases will grow over time.

If you are a Sitecore veteran, you have experienced this rapid collection growth in MongoDB. As soon as the platform is "turned on", it starts collecting interactions and events for both anonymous and identified contacts.

Putting a Sitecore environment into Production means opening up the flood gates for a massive amount of data that you don't have much control over.

xDB Search Index

On the xDB index side of the house, Sitecore has filtered the interaction and contact data in the xDB index to identified contacts only (by default). This is good for simpler implementations that aren't doing much with the platform.

However, if you have millions of contacts, you will have the same index problems as you may have faced when dealing with anonymous contact data in previous versions, mentioned in this blog post from a while ago. There is a solution for this that I may touch on in a later post.

The 2 Shard SQL Problem

Out of the box, if you install the Sitecore platform using default scripts and SIF or if Sitecore Managed Cloud has set up your environment in Azure, you will have 2 SQL shard collection databases.

The value proposition of Sitecore xDB is the ability to store any events tied to an interaction from any channel in xDB. They have provided a robust set of APIs to allow this.

The problem is that storing hundreds of gigabytes or even terabytes of data requires a very carefully planned strategy, or else, you will fail. It is just a matter of when.

I always have the following scene from Evan Almighty in my head when I talk about this problem:




The bottom line - if you have an enterprise deployment and are using xDB, the out-of-the-box 2 shard collection database configuration is not enough!

New Deployments

If you are new to the Sitecore platform, you can fix this by using the Shard Map Manager Tool to increase the number of shards. This great post by Kelly Rusk explains how: http://thebitsthatbyte.com/what-is-and-how-to-use-the-sitecore-9-shard-map-manager-tool

Existing / Live Deployments

Bad news if you have an existing deployment.

You will hit bottlenecks as you store more contact, interaction and event data. With limited CPU, storage capacity and memory, database performance will start to suffer and query performance and routine maintenance will slow down.

When it comes to adding resources to support database operations, vertical scaling (aka scaling up which is very easy to do on Azure) has its own set of limits and eventually reaches a point of diminishing returns.

The Negative Ripple Effect on xDB

I have seen cases where, due to the massive amount of data stored in the xDB collection shards over time, xConnect Search fails to keep the xDB index in sync, and xConnect search stops working.

After this happens, the only option is to rebuild your xDB index, but because of the poor xDB collection database performance, it will take days, if not weeks, and that's if you are lucky.

Or, it will simply keep failing.

How Increasing Shards Helps

Adding additional collection shards to xDB means additional SQL compute capacity to serve incoming queries in the distributed configuration, and thus faster query response times and index builds.

Additional shards will increase total cluster storage capacity, speed up processing, and offer higher availability at a much lower cost than vertical scaling.

How the xDB Resharding Tool Helps

As I started working with Vitaly to architect this tool, our first idea was to use the Data Exchange Framework that is used to power the xDB Migration Tool. We had used a customized version of this tool when we migrated from our Sitecore 8.2 deployment to our current 9.1.

We decided to pivot, because we wanted a lightweight tool that could run on any Windows-based machine, and would run directly against SQL and as a result, be much more efficient!

The Beauty of the Tool Part 1: Migrating Your Data

This tool allows you to reshard your Sitecore xDB Collection Databases without any data loss.

What does this mean exactly?!

This tool allows you to migrate your current SQL xDB analytics collection database shards to a new set of xDB analytics collection database shards without losing any of your data.

So, for example, if you have 2 shard databases and want to move up to 4 shard databases, this tool will allow you to migrate over.

For this example, you would set up 4 new shards using SIF (as shown in Vitaly's GitHub readme doc), or use the Shard Map Manager Tool, and then point the tool at your old shards and the new shards and voila! Your data will get migrated over!

The Beauty of the Tool Part 2: Resume Mode

Another fantastic feature that Vitaly added to this tool is "resume mode". If there is a glitch in the migration process, or you need to stop it manually for some reason and resume later, it will remember where it left off and pick the migration right back up!

Battle-Tested and Ready For Download!

This tool has been tested, and I can say that it works, and works well!

You can check the full source code out on Vitaly's GitHub: https://github.com/pblrok/Sitecore.XDB.ReshardingTool

You can download your copy today using this link: https://github.com/pblrok/Sitecore.XDB.ReshardingTool/raw/master/ToolReleases/win-x64.zip



Huge xDB and xConnect Improvements in Sitecore 9.3

With the release of Sitecore 9.3, there is a lot of hype in the community around the new bells and whistles that the updated version of the platform offers.

What I am most excited about are the huge performance and stability improvements in xConnect and xDB, and in this post, I will highlight those key areas.



Search Indexer Rebuild Enhancements

Performance

Sitecore made some very important improvements to the stability of rebuilds in systems under stress scenarios, allowing for faster rebuilds:

  • Implemented batching and parallelization during the rebuild synchronization stage
  • Reduced the default SplitRecordsThreshold from 25,000 to 1,000
  • Reduced the ParallelizationDegree of the rebuild by half
  • The sync stage of the rebuild can now finish if there is an indexing cycle where fewer than 25,000 changes are detected.


New Commands

  • Check status of rebuild: -rebuildmonitor / -rm
    • Previously, you could only check the logs for completion entries
  • Delete obsolete data from index: -cleanuprebuildindex / -cri
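For example, assuming the standard xConnect layout where the indexer executable lives in the IndexWorker folder (verify the executable name in your own distribution), the new commands can be run like this:

rem Check the status of a running rebuild
XConnectSearchIndexer.exe -rebuildmonitor

rem Delete obsolete data from the index
XConnectSearchIndexer.exe -cleanuprebuildindex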


Rebuild Resume

If you have a lot of interaction data, rebuilds can take a very long time!

If you have a hiccup during the process, there is finally an option to resume without having to start from scratch! A major feature to say the least.


Sharding Deployment Tool Enhancements

With the 9.3 Database Deployment Tool, we finally have the ability to add and remove shards in the xDB collection store. Adding is especially important, as I mentioned in my previous post.

To summarize, additional collection shards means additional SQL compute capacity to serve incoming queries in the distributed configuration, and thus faster query response times and index builds. Additional shards will increase total cluster storage capacity, speed up processing, and offer higher availability at a much lower cost than vertical scaling.

You will be required to run an upgrade tool that modifies your pre-9.3 collection databases to allow these shard additions and removals.

Web Tracker Stability

The following improvements have been made to the Web Tracker's stability and performance:

  • Session expiration batching
  • Optimization of the Web Tracker and xConnect communication
  • Optimization of the Web Tracker and Reference Data Service communication


Reference Data Layer Redesign 

There have been a lot of issues in previous versions of 9.x that required patches and SQL database scripts to resolve. See my post on Critical Patches for 9.1.

In 9.3, the Reference Data layer has been completely redesigned from the ground up, with the main focus being on performance.

New TVP Provider

Sitecore uses table-valued parameters (TVPs) to send aggregated information to SQL, because this is a very fast way to communicate with SQL Server. Basically, you can use table-valued parameters to send multiple rows of data to a function or stored procedure without the overhead of creating a temporary table or using tons of parameters.
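To illustrate the idea (the type, procedure, and table names below are hypothetical for illustration, not Sitecore's internal schema, and assume a dbo.Events table already exists), a table-valued parameter lets one procedure call carry many rows:

-- Define a reusable table type
CREATE TYPE dbo.EventRows AS TABLE
(
    EventId   UNIQUEIDENTIFIER NOT NULL,
    EventName NVARCHAR(100) NOT NULL
);
GO

-- A procedure that accepts many rows in a single round trip, no temp table required
CREATE PROCEDURE dbo.SaveEvents
    @Rows dbo.EventRows READONLY
AS
BEGIN
    INSERT INTO dbo.Events (EventId, EventName)
    SELECT EventId, EventName
    FROM @Rows;
END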

Sitecore has added a new TVP provider in 9.3 that provides major performance improvements.

Ability to Turn On and Off Performance Counters in Azure

If you are deploying to PaaS, performance counters are now disabled by default just like CMS counters.

With 9.3, you have the ability to switch performance counters off and on for xConnect, xDB and Cortex services.


Sitecore xDB - Troubleshooting Contacts Moving Slowly Through Marketing Automation Plans


Background

We ran into an issue on our Sitecore 9.1 Azure PaaS deployment, where contacts were moving extremely slowly through the elements of several of our Marketing Automation plans. The contacts were being enrolled correctly, but when they got to a certain step, they could be stuck for several days.

Our instance includes several high-traffic sites, but our plans were not complicated at all. For example, one was set up like this:

  • Visitor updates their profile, and after clicking submit, a goal is triggered that enrolls the contact into the plan.
  • There is a 1 minute delay.
  • An email is sent.

In this post, I will describe the approach I took to getting to the bottom of this problem.

Getting More Information From Marketing Automation Logs

The first step in any troubleshooting process is to get as much detail from your logs as possible so that you can better understand what could be going wrong.

One of the key components of Sitecore Marketing Automation is the task that is responsible for processing contacts in the Automation Plans. In Azure, this is the Marketing Automation Operations App Service (ma-ops) WebJob; on a Windows Server, it is the Sitecore Marketing Automation Engine service. What you need to do is set the logging level to “Information” in the sc.Serilog.xml file.

The full path of the file location is the following (Azure):
wwwroot/App_Data/jobs/continuous/AutomationEngine/App_Data/Config/sitecore/CoreServices/sc.Serilog.xml

For more information, check the following article: https://kb.sitecore.net/articles/017744

After getting this in place, I was able to rule out the possibility of exceptions being thrown that could have been causing the problem.

Finding the clogged Automation Pool

After digging in, I discovered that the number of contacts in the UI corresponded to the data in the Marketing Automation (ma-db) database’s AutomationPool table.  

These are the other things I discovered:
  • Running a “row count” query against the AutomationPool table determined that there were hundreds of millions of records.
  • Checking Azure resource analytics showed large database utilization spikes (100% DTU) that corresponded to high CPU on my ma-ops service (averaging above 70%).
  • Running a profile on the maengine.exe job revealed that getting and processing the data from the SQL server seemed to be the bottleneck.

Fixing the clogged Automation Pool

Database Health

One key assumption here was that my ma-db was healthy, and that its indexes were not seriously fragmented.

In previous posts, I have spoken about how important it is to make sure that regular maintenance is performed on your databases. Same thing applies here. Make sure that you run the AzureSQLMaintenance Stored Procedure on your ma-db regularly.
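For reference, a maintenance pass using the widely shared version of that script looks like the following (parameter names may differ slightly in your copy, so check the procedure definition first):

-- Rebuild/reorganize indexes and refresh statistics on ma-db
EXEC dbo.AzureSQLMaintenance @operation = 'all', @mode = 'smart', @LogToTable = 1;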

Increase Pricing Tier If Possible

The analysis determined that both my ma-ops app service and ma-db database were struggling to keep up with the processing load, and so I increased the pricing tier on both to give them more horsepower.

Increase Low Priority Worker Batch Size

The AutomationPool table showed me that almost all of the items had a priority of 80 and would be processed by the LowPriorityWorker. I opted to increase the batch size of this worker so that it would process more items in each batch. My worker was initially set to 200, and so I tripled it to 600.

The config file location can be found here: App_Data/jobs/continuous/AutomationEngine/App_Data/Config/sitecore/MarketingAutomation/sc.MarketingAutomation.Workers.xml

<LowPriorityWorkerOptions>
  <!-- How long the worker sleeps when there is no work available -->
  <Schedule>00:00:20</Schedule>
  <!-- The minimum priority of work item to process. -->
  <MinimumPriority>0</MinimumPriority>
  <!-- The timeout period to use for work items. -->
  <WorkItemTimeout>00:00:45</WorkItemTimeout>
  <!-- The period after which the work item timeout is set again. -->
  <WorkItemTimeoutSchedule>00:00:30</WorkItemTimeoutSchedule>
  <!-- The batch size multiplier to use when checking out work items from the pool. -->
  <BatchHead>4</BatchHead>
  <!-- The batch size to use when checking items out from the pool. -->
  <BatchSize>150</BatchSize>
</LowPriorityWorkerOptions>

Clog Removed

After making these adjustments, I kept a close eye on the number of rows in the AutomationPool table.

I am happy to report that they were decreasing at a pleasing rate, and that the data started to appear in the reports in the UI.


Sitecore xDB - Solr xDB Index Troubleshooting Postman Collection


Background

Over time, I have saved a series of Solr queries that I have found to be extremely useful in understanding, troubleshooting and maintaining xDB Solr indexes.  Some are especially useful when performing an xDB index rebuild.

Getting Started

After downloading the Sitecore xDB.postman_collection.json file, you can install it in your Postman application by clicking the "Import" button located in the top-left corner of the app and selecting the file downloaded on your computer.

After doing this, you will see the Sitecore xDB collection appear in your Postman app:



The core / index name, and Solr url are variables, so the next thing you want to do is set the values for your configuration.

Click on the Sitecore xDB collection, and then the more actions option (3 dots) will appear. Click on Edit, and then navigate to Variables.  




Set the solr_url variable to your Solr instance url, and the xdb_index to match the index you want to work with.  

There are two xConnect Solr cores: the live core (usually xdb) and the rebuild core, which has a _rebuild suffix (like xdb_rebuild). So, you can set this to either value, depending on the core you want to work with.


That's it! You should be ready to test.

Working with the collection

The collection has the following queries available to help with your troubleshooting or maintenance.

xDB Contacts Count

This will return the number of contacts in your index (shown in the numFound value that is returned).

When you start an index rebuild, depending on the core you have configured, you will see this number initially be 0, and then gradually start to increase as contacts get added to the core from your shard databases. 

Obviously, this is also useful to see how many total contacts you have in your index over time.



xDB Interactions Count

This returns the number of interactions in your index (shown in the numFound value that is returned).

xDB Docs Count

This returns the total number of documents in your xDB core. This is a combination of contacts and interactions.
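Under the hood, this is just a standard Solr match-all query that returns no documents; the numFound value in the response carries the count. Using the collection variables described above, the request looks roughly like this:

GET {{solr_url}}/{{xdb_index}}/select?q=*:*&rows=0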

xDB Sync Token

This returns the sync token which is a key component of the Sitecore xConnect / xDB optimistic concurrency model. See more information about it here: https://sitecore.stackexchange.com/questions/12453/xconnect-indexworker-error-tokens-are-incompatible-they-have-different-set-of#answer-12454

xDB Rebuild Status

As described in one of my previous posts, it will return the status of your index, which is especially useful if you are performing an index rebuild.

Default = 0
RebuildRequested = 1
Starting = 2
RebuildingExistingData = 3
RebuildingIncomingChanges = 4
Finishing = 5
Finished = 6

xDB Schema Modifications

When setting up xDB cores for the first time, you are required to make specific schema modifications to both your xDB and xDB rebuild cores using the Solr Schema API. See the Sitecore docs site for more information: https://doc.sitecore.com/developers/90/platform-administration-and-architecture/en/walkthrough--using-solrcloud-for-xconnect-search.html

This post request contains the JSON schema modifications that need to be applied to each core.  The JSON that is used in the Body of this request can be found in the schema.json file located in the \App_Data\solrcommands folder of your xConnect instance.





xDB Delete Interactions

Do not use this in your Production environment! As the name implies, it will remove interactions from your index. This is useful for local dev environments, if you need to clean things up in your index. This can also be used as an example for working with your Solr cores via the API.

Sitecore xDB - Right To Be Forgotten Breaking EXM Sends


Background

Our email manager faced an Email Experience Manager (EXM) issue where, when starting or resuming an email campaign, it would start initializing and then go directly to a paused state.

Errors

The logs of our Content Management Server revealed the following errors: 

Cause

After digging in, I discovered that this was caused by contacts in our lists that were missing an Alias identifier. This is a dependency for EXM in the way it pulls in the necessary contact data during dispatch.

What had happened was that our email manager had used the Anonymize feature in the Experience Profile dashboard that executes the right to be forgotten xConnect API under the covers. This process removes the Alias identifier from the contact record in our key xDB shard database tables. 



Checking the xDB Database

If you have SQL experience, you will be pleasantly surprised to find that the xDB databases are not that complicated to work with at all.

Executing the following SQL query helped me find the contacts that were missing contact identifiers:
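The original query screenshot is not reproduced here, but a sketch along the following lines captures the idea (the xdb_collection schema and table names come from the shard databases; treat the exact column names as assumptions that may vary by version):

-- Find contacts that have no 'Alias' row in the identifiers table
SELECT c.ContactId
FROM [xdb_collection].[Contacts] c
WHERE NOT EXISTS (
    SELECT 1
    FROM [xdb_collection].[ContactIdentifiers] ci
    WHERE ci.ContactId = c.ContactId
      AND ci.[Source] = 'Alias'
);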


If you compare what a normal contact looks like vs an anonymized contact, you can see that the records have been wiped from the key ContactIdentifiers and ContactIdentifiersIndex tables.

This is the SQL statement that I used:


Normal Contact


Anonymized Contact



Another important point to note is that when a contact has been anonymized, a new ConsentInformation facet is added to the contact.

 


The Fix

I simply wrote a SQL statement that would insert new Alias identifier records in the ContactIdentifiers and ContactIdentifiersIndex tables for the contacts in my lists that had been anonymized.


Running this SQL statement fixed the missing dependency that EXM required for its email dispatch, and our email campaigns started sending again.



Sitecore xDB - Extending xConnect To Reduce xDB Growth


Background

As I mentioned in a previous post,  putting a Sitecore environment into Production with xDB enabled means opening up the flood gates for a massive amount of data flowing into the platform as it starts collecting interactions and events for both anonymous and identified contacts.

xDB Storage Smarts

Just because you can store everything in xDB, it doesn't necessarily mean that you should store everything.

xDB is a marketing database. As a rule of thumb, you should only store data in xDB if it's required for:
  • Personalization
  • Targeting
  • Measurement / Analytics

It is important to plan for the data requirements early, and again, only store the data that you really need to!

Remove xDB Analytics Data

In version 9.2 and later, xConnect allows you to delete contacts and all associated data in xDB if you choose to. So, you have the ability to remove the data that isn't useful to you anymore. https://doc.sitecore.com/developers/93/sitecore-experience-platform/en/deleting-contacts-and-interactions-from-the-xdb.html

But, you need to do this by writing code, and so you would need to work with your Sitecore architect and developers to carefully plan this out for your implementation. And again, this is only an option for the newer versions of the platform.

Extend xConnect To Filter xDB Analytics Data

One good option that I have implemented is to prevent analytics data that is not valuable to your business from getting into xDB in the first place. xConnect allows you to extend the XConnectDataAdapterProvider, which lets you hook into the sweet spot where session data is written to the shard databases for storage.



For my implementation, I extended the data adapter provider and added configuration so that only goals and other events that I explicitly specify are recorded in my collection database.

I hope that this helps with your implementation!

Sitecore xDB - Extending xConnect To Reduce xDB Index Sizes And Rebuild Times


Background

In my previous post, I wrote about how you can extend the xConnect Data Adapter Provider to prevent unwanted data from being written to your shard databases.

But, what if you already have a lot of analytics and contact data in your shards, have an xDB index issue, and are struggling to rebuild your index?  

Or, what if you want to keep the data in your databases just in case you may need it some time in the future but want to lighten the load on your xDB index to improve performance?

In this post, I will show you how to filter data from your xDB index to reduce the index size and growth rate. This will also improve your rebuild times tremendously.

You can also skip ahead and check out the full source code for my example on my GitHub repo: https://github.com/martinrayenglish/Test.SCExtensions


Business Scenario

In a 9.1 project, I was faced with very slow rebuild times; it was literally taking weeks to rebuild our xDB index. I was very concerned with the amount of data we were ingesting into the xDB index from the shard databases, specifically the rate at which interactions were being added. I have an older post that talks about this topic, so give it a read if you have not already.

Prior to 9.2, there was no true supported way to purge old interaction data, so I was faced with the question of how I could reduce the amount of interaction data that our xDB index contained, and prevent our business from being paralyzed for weeks waiting for a rebuild to complete in the future.

Solving The Problem

As the sheer quantity of interactions was the problem, I wanted to be able to filter both new and existing interactions from being brought in from the shard databases during a rebuild or after the session-end data aggregation process.

Round 1

In my initial exploration, I focused on customizing the index writer code, as that seemed to be the sweet spot. The code below shows an example of where I customized it to remove ALL interactions from making their way to the xDB index (see line 28, where I simply new up a blank list of InteractionDataRecord objects):

To apply the updated writer, I put the new assembly into:

{xconnect}\bin
{xconnect}\App_data\jobs\continuous\IndexWorker

I then updated the following XML files to use the new writer:

{xconnect}\App_data\jobs\continuous\IndexWorker\App_data\Config\Sitecore\SearchIndexer\sc.Xdb.Collection.IndexWriter.Solr.xml
{xconnect}\App_data\config\sitecore\SearchIndexer\sc.Xdb.Collection.IndexWriter.Solr.xml

<IIndexWriter>
  <Type>Test.SCExtensions.Xdb.Collection.Search.Solr, Test.SCExtensions</Type>
  <As>Sitecore.Xdb.Collection.Indexing.IIndexWriter, Sitecore.Xdb.Collection</As>
  <LifeTime>Singleton</LifeTime>
</IIndexWriter>

This worked well for new interactions coming in from the shard databases after session end, but did not work for rebuild operations, so I needed to explore further.

Round 2

The problem I discovered after my initial try was that the SolrWriter type specified above is not used by the rebuilder because of dependency injection implementation details, so writing a custom implementation of the rebuilder class was not a valid option.

I started looking into other options to override the related logic.

After investigating further, I discovered that a custom implementation of both the IndexRebuilderFilter and IndexWriterFilter would take care of both new interactions from sessions and existing interactions coming in via a rebuild operation.  

I created two new classes to perform these tasks, which I used to decorate the related interfaces.

Again, in these examples, I am removing ALL interactions from my xDB index. In a real world scenario, it would be best to filter based on analytics data that is valuable to your business so that it will be available via the xDB search API. 

I show you how to do this further down in my post: More Bang - Selective xDB Index Filtering.

Next, I created decorator XML configs to allow xConnect to use my new writer extension code.

Rebuilder writer config location: {xconnect}\App_Data\jobs\continuous\IndexWorker\App_data\config\sitecore\SearchIndexer\sc.Xdb.Collection.Indexing.IndexRebuilderFilterDecorator.xml

<?xml version="1.0" encoding="utf-8"?>
<Settings>
  <Sitecore>
    <XConnect>
      <SearchIndexer>
        <Services>
          <IIndexRebuilderFilterDecorator>
            <Type>Sitecore.XConnect.Configuration.ServiceDecorator`2[[Sitecore.Xdb.Collection.Indexing.IIndexRebuilder, Sitecore.Xdb.Collection], [Test.SCExtensions.Xdb.Collection.Search.Solr.IndexRebuilderFilterDecorator, Test.SCExtensions]], Sitecore.XConnect.Configuration</Type>
            <LifeTime>Singleton</LifeTime>
          </IIndexRebuilderFilterDecorator>
        </Services>
      </SearchIndexer>
    </XConnect>
  </Sitecore>
</Settings>
Writer config location: {xconnect}\App_Data\jobs\continuous\IndexWorker\App_data\config\sitecore\SearchIndexer\sc.Xdb.Collection.Indexing.IndexWriterFilterDecorator.xml

<?xml version="1.0" encoding="utf-8"?>
<Settings>
  <Sitecore>
    <XConnect>
      <SearchIndexer>
        <Services>
          <IIndexWriterFilterDecorator>
            <Type>Sitecore.XConnect.Configuration.ServiceDecorator`2[[Sitecore.Xdb.Collection.Indexing.IIndexWriter, Sitecore.Xdb.Collection], [Test.SCExtensions.Xdb.Collection.Search.Solr.IndexWriterFilterDecorator, Test.SCExtensions]], Sitecore.XConnect.Configuration</Type>
            <LifeTime>Singleton</LifeTime>
          </IIndexWriterFilterDecorator>
        </Services>
      </SearchIndexer>
    </XConnect>
  </Sitecore>
</Settings>

After applying these updates, I was able to successfully filter interactions coming from the shard databases to my xDB index during both rebuilds and session ends!

More Bang - Selective xDB Index Filtering

As you don't necessarily want to remove all interactions from the xDB index unless you are in dire straits, I added a new class for filtering interactions based on specific events. In other words, if an interaction has an event in it that we care about, we allow it to go into the index.
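As a simplified, self-contained stand-in for what that class does (the record types below are hypothetical placeholders for the xConnect types the real filters receive; the actual implementation is in the GitHub repo linked below), the whitelist logic boils down to this:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-ins for the interaction records the real filter operates on
public record EventRecord(Guid DefinitionId);
public record InteractionRecord(Guid Id, IReadOnlyList<EventRecord> Events);

public static class InteractionIndexFilter
{
    // Definition IDs of the goals/events your business actually needs in the index
    private static readonly HashSet<Guid> AllowedEventDefinitionIds = new HashSet<Guid>
    {
        Guid.Parse("11111111-1111-1111-1111-111111111111") // placeholder ID
    };

    // Keep an interaction only if it contains at least one event we care about
    public static IEnumerable<InteractionRecord> Filter(IEnumerable<InteractionRecord> interactions)
    {
        return interactions.Where(i => i.Events.Any(e => AllowedEventDefinitionIds.Contains(e.DefinitionId)));
    }
}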


By using this example, you can lighten your index, and still include interactions that are important to you for personalization etc. 

The full source code for this example can be found on my GitHub repo: https://github.com/martinrayenglish/Test.SCExtensions

Sitecore Azure PaaS - Fixing "No owin.Environment item was found in the context" When Application Initialization Configured


Background

On Azure PaaS, it is well known that Application Initialization can assist in “warming up” your Sitecore application when scaling up or swapping slots, so that your App Services instances aren't thrown into rotation to receive traffic when they aren't ready. 

Unfortunately, since Sitecore 9.1, there is a nasty Owin error that prevents this from working correctly, and this in turn impacts the health signaling of your App Services.

Strangely, the error would disappear after about 2 minutes.


Sitecore Support Response

After talking with Sitecore Support engineers, they ended up telling us that using <applicationInitialization> had not been tested for Sitecore, and that they don't have any official guidelines on configuring Azure App Initialization.

Finally, they said that they were not able to provide us with any assistance.

Cause

I started deciphering the problem on my own. 

Looking at this error, the key to understanding its root cause was on line 6: the BeginDiagnostics processor.

1:  [InvalidOperationException: No owin.Environment item was found in the context.]  
2: System.Web.HttpContextExtensions.GetOwinContext(HttpContext context) +100
3: Sitecore.Owin.Authentication.Publishing.PreviewManager.get_User() +15
4: Sitecore.Owin.Authentication.Publishing.PreviewManager.GetShellUser() +23
5: Sitecore.Pipelines.HttpRequest.BeginDiagnostics.ResolveUser() +41
6: Sitecore.Pipelines.HttpRequest.BeginDiagnostics.Process(HttpRequestArgs args) +43
7: (Object , Object ) +15

The BeginDiagnostics processor in the httpRequestBegin pipeline is used for Sitecore's diagnostic tools. This determines what happens if you try to run in Debug Mode in the page editor.

In this case, the PreviewManager User property was trying to return HttpContext.Current.User, which was not yet available; it returned null, and the error “No owin.Environment item was found in the context.” was thrown.

The Fix

After determining what was causing this error, the fix was simple. The processor isn't needed on Content Delivery instances, as Debug Mode is only used on Content Management instances.

Removing the processor via a simple patch fixed the error:
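Here is the shape of the patch, assuming the processor is registered under the type name shown in the stack trace above (verify the assembly name against your version's configuration) and using rule-based configuration to scope the deletion to the ContentDelivery role:

<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/" xmlns:role="http://www.sitecore.net/xmlconfig/role/">
  <sitecore>
    <pipelines>
      <httpRequestBegin>
        <processor role:require="ContentDelivery" type="Sitecore.Pipelines.HttpRequest.BeginDiagnostics, Sitecore.Kernel">
          <patch:delete />
        </processor>
      </httpRequestBegin>
    </pipelines>
  </sitecore>
</configuration>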


Understanding Sitecore's Self-Adjusting Thread Pool Size Monitor


Background

In a previous post, I focused on the inner workings of the .NET CLR and Thread Pool and how they can impact the stability of the Sitecore application.

I must admit, I have become mildly obsessed with threading over the last couple of years, mostly because a great deal of my work has involved stabilization and optimization practices on high-traffic Sitecore sites.

In this post, I want to focus on the Thread Pool Size Monitor that comes baked into Sitecore from version 9 onwards, because it is not widely known that it exists, what job it does, and how it can be tuned to optimize performance.

Thread Pool Size Monitor

To recap, the most important thread configuration settings are minWorkerThreads and minIOThreads, where you can specify the minimum number of threads available to your application's Thread Pool instead of relying on the default formulas based on processor count, which always yield too few.

Threads that are controlled by these values can be created at a much faster rate (because they are spawned from the Thread Pool) than worker threads that are created from the CLR's default "thread-tuning" capabilities.

To summarize: 

  • Thread pool threads get thrown in faster to handle work. 
  • The CLR thread spin up algorithm is too slow and we can't rely on it to support high performance applications.

As previously mentioned, in Sitecore 9 and above, there is a pipeline processor that allows the application to adjust thread limits dynamically based on real-time thread availability (using the Thread Pool API).

This can be found in the following namespace: Sitecore.Analytics.ThreadPoolSizeMonitor.

By default, every 500 milliseconds, the processor will keep adding a value of 50 to the minWorkerThreads setting via the Thread Pool API until it determines that the minimum number of threads is adequate based on available threads.
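As a rough sketch of that loop using the .NET Thread Pool API (the availability check below is a simplification of Sitecore's actual stopping heuristic, which I am not reproducing here):

using System;
using System.Threading;

public static class ThreadPoolSizeMonitorSketch
{
    public static void AdjustLoop(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            // Read the current minimums and real-time availability
            ThreadPool.GetMinThreads(out int minWorker, out int minIo);
            ThreadPool.GetAvailableThreads(out int availableWorker, out _);

            // If free worker threads are running low, raise the floor by 50
            if (availableWorker < minWorker)
            {
                ThreadPool.SetMinThreads(minWorker + 50, minIo);
            }

            // Re-evaluate every 500 milliseconds
            Thread.Sleep(TimeSpan.FromMilliseconds(500));
        }
    }
}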

How It Works

Since a picture is worth a thousand words, I put together a diagram of how the Thread Pool Size Monitor logic works, and provided an example with the default settings on an Azure P3v2 App Service that has 4 cores.




Custom Thread Pool Size Monitor Configuration

An enhancement that I made on my past 9.1 PaaS implementation was to tune Sitecore’s dynamic thread processor using a more “aggressive” configuration. This helped me with those “bursty” web traffic situations where I needed to be sure that I had enough threads available to serve the current demands.

Here is the configuration that I used:

The Curious Case of Sitecore's Enforce Version Presence Permission Denied

$
0
0

Background

I ran into a perplexing enforce version presence & language fallback issue that I wanted to share so that others won't have to go down the same rabbit hole as I did to uncover the underlying issue.

Language, Fallback and Version Presence Config

Our site is configured for almost 3 dozen languages, and as you can imagine, we rely on language fallback a lot!

Language fallback works by displaying a fallback version of a language item when there is no version available in the current language context. 

So if you go to https://www.mysite.com/de-de/mypage and there isn't a German version available for that page, it will instead render the English version (if one exists) at that URL.

Now, let's say that you don't have an English version for that particular page either. So, there is only a Dutch version of the page available at https://www.mysite.com/nl-nl/mypage. Then you go to the German URL https://www.mysite.com/de-de/mypage. Since there isn't any fallback English version available, you would expect to see a 404 response, right?

To set this up, the item would need to have both "Enable Item Fallback", and "Enforce Version Presence" enabled at its item level. The recommended way to do this is by setting these values on the template Standard Values.


Finally, you would need to set fallback and version presence on your site configuration. If you are using SXA, you would enable this under the /sitecore/content/TenantFolder/TenantName/SiteName/Settings/Site Grouping/SiteName item.

In "Other Properties", you set "enforceVersionPresence" set to true:

Problematic Results

Initially, this seemed to work as expected. 

Referencing the example again: we only had a single language version of the item, and nothing else (not even the English fallback). The Dutch version of the page was available at https://www.mysite.com/nl-nl/mypage, and when accessing any other language URL (https://www.mysite.com/de-de/mypage or https://www.mysite.com/en/mypage) we would see the expected 404.

Over time, we discovered that "Permission to the requested document was denied" issues started bubbling up all over the place for our items that were supposed to fall back. Initially, this seemed to be random: different users would access the same URL, and some would experience the permission denied message, while the fallback and content would resolve without issue for others.

Investigation

This was very tricky to reproduce in a scaled environment. In our investigation, we discovered the following scenario:

1. Create any page in a non-English version only.
2. Save and publish the page.
3. Access the page URL in any other language version that does not exist for that item.
    3.1. A 404 for that item is returned.
4. On the same server, try to access the page URL in a different language version that does not exist for that item.
    4.1. "Permission to the requested document was denied" is returned.
5. Clear the AccessResult cache and first access the item from step 4 above that didn't previously work; it will now work. Trying to access the item from step 3 above that worked previously will then throw the permission denied error.

Perplexing right?

Let me provide a more specific example:

  • A Dutch version of the page was available at https://www.mysite.com/nl-nl/mypage 
  • Accessed the page via the German url https://www.mysite.com/de-de/mypage
    • The expected 404 for the item is returned.
  • On the same server, access the English page url via  https://www.mysite.com/en/mypage
    • "Permission to the requested document was denied" is returned.

Root Cause

The support team confirmed this critical bug in the Sitecore.Kernel that is present in all recent Sitecore versions.

The piece of code responsible for the permission denied error was the following:


Regardless of the value that is returned for the checks, the key/value combo is stored in the AccessResultCache.

The next time the item is requested, it is served from this cache and the permission denied error is thrown.

Fix Forthcoming

Due to the criticality of this bug, the product team is working on a fix. Once it is ready and has gone through our own QA cycles, I intend to post an update with more information about it.

Control Tracking JavaScript Using Sitecore Rule-based Configuration


Background

Being able to control tracking JavaScript by environment and server role is a common problem that Sitecore developers are faced with. For example, your client doesn't want their Production Google Tag Manager or agency-delivered tracking pixel scripts firing on any server/app instance other than their Production Content Delivery, as it will spoil analytics.

Most of the time, developers will add some type of "if statement" code in Layouts or Renderings to help facilitate this, but this can be difficult to control and maintain depending on the number of scripts you end up adding to your site(s).

In addition, if you are using SXA and HTML snippets in your metadata partial design to house the scripts, this becomes even harder.

Post-Processing HTML

I wanted to focus on finding the sweet spot in Sitecore where I could inspect the entire HTML output after it had been glued together by the various pipelines, and then remove the target script from the HTML before it was transmitted to the browser.

Sitecore's httpRequestProcessed pipeline gives us the entry point, where we can leverage the MVC framework's result filter to manipulate the HTML.

I told my content authors to add a new attribute called "data-xp-off" to their scripts that I would use as the flag to determine if the script would be removed from the page.

For example:
<script data-xp-off>some tracking stuff</script>



Writing the Code

The first step was to create a new HttpRequest processor and associated configuration to inject into the httpRequestProcessed pipeline. Within this, I was able to access the HttpContext response filter object, where I could perform the targeted script removal.

As you can see from the config, you can use whatever rule-based role configuration you need to apply the processor.

Next, I created a class based on the System.IO Stream class, where I overrode the Flush method. Within this new Flush method, I removed the script using a regular expression (based on the existence of the data-xp-off attribute within the HTML), and then wrote the result to the response.
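A minimal sketch of such a filter stream follows (the class name is a hypothetical placeholder, buffering is simplified, and a production version would need to respect the actual response encoding rather than assuming UTF-8):

using System;
using System.IO;
using System.Text;
using System.Text.RegularExpressions;

public class TrackingScriptFilterStream : Stream
{
    // Matches script/noscript/style tags flagged with the data-xp-off attribute
    private static readonly Regex FlaggedTags = new Regex(
        @"<(script|noscript|style)[^>]*\bdata-xp-off\b[^>]*>.*?</\1>",
        RegexOptions.Singleline | RegexOptions.IgnoreCase | RegexOptions.Compiled);

    private readonly Stream _inner;
    private readonly MemoryStream _buffer = new MemoryStream();

    public TrackingScriptFilterStream(Stream inner)
    {
        _inner = inner;
    }

    // Buffer everything so the regex can see the complete HTML document
    public override void Write(byte[] bytes, int offset, int count)
    {
        _buffer.Write(bytes, offset, count);
    }

    // Strip the flagged tags, then write the cleaned HTML to the real response
    public override void Flush()
    {
        string html = Encoding.UTF8.GetString(_buffer.ToArray());
        byte[] cleaned = Encoding.UTF8.GetBytes(FlaggedTags.Replace(html, string.Empty));
        _inner.Write(cleaned, 0, cleaned.Length);
        _inner.Flush();
    }

    public override bool CanRead => false;
    public override bool CanSeek => false;
    public override bool CanWrite => true;
    public override long Length => _buffer.Length;
    public override long Position { get; set; }
    public override int Read(byte[] buffer, int offset, int count) => throw new NotSupportedException();
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
}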

You will notice that I also included the "noscript" and "style" tags as an option for the filtering which was a bonus.

So you may ask me: "Martin, why did you not use the powers of the Html Agility Pack to perform your HTML manipulation?" To be honest, that was my first approach. I wrote this code:

I discovered that the InnerHtml returned by the Agility Pack was making unintentional changes to my HTML markup, and that caused problems with client-side heavy components.  Digging into Sitecore's code, I discovered that they used the regular expression approach when injecting their Experience Editor "chrome" elements, and so I went down that path too.


Sitecore Publishing Service - Publishing Sub Items of Related Items

$
0
0

Background

We ran into an issue with our Sitecore 9.1 and Publishing Service 4.0 environment where, when a page item with a rendering was published, the corresponding data source of the rendering was not published fully.

To be more specific, if the rendering referred to a data source item that had multiple levels of items, then only the root of the data source was being published but not the child items.

A good example would be a navigation rendering that had a data source item with a lot of children. Content authors were making updates to all the link items within Experience Editor, but they were not being published.

This was happening for both manual publishing and publishing through workflow.


Configuration Updates

In our research, we discovered that the Publishing Service allows you to specify the templates of the items you wish to publish as descendants of related items.

Adding the following node to sc.publishing.relateditems.xml did the trick (after a restart):
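The exact configuration is easiest to copy from the Publishing Service documentation for your version, but as a rough illustration only (the surrounding element nesting is abbreviated here and the template IDs are hypothetical placeholders), the added node follows this shape, with one uniquely named child element per data source template:

<Templates>
  <DatasourceTemplate1>{11111111-1111-1111-1111-111111111111}</DatasourceTemplate1>
  <DatasourceTemplate2>{22222222-2222-2222-2222-222222222222}</DatasourceTemplate2>
  <DatasourceTemplate3>{33333333-3333-3333-3333-333333333333}</DatasourceTemplate3>
</Templates>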

It is very important to note that the template nodes need to have unique names in order for this to work.

In other words: DatasourceTemplate1, DatasourceTemplate2, DatasourceTemplate3 etc. 

So as you can imagine, if you want to include a lot of data source item templates, the list in your configuration can get extremely large!

Final Words

I hope that this information helps developers who face a similar issue, as I could not find anything online about this related publishing configuration.

As always, feel free to comment or reach me on Slack or Twitter if you have any questions.                                                    

The Curious Case of Sitecore's Enforce Version Presence Permission Denied - The Fix


In a previous post, I shared a baffling enforce version presence & language fallback issue that led my team down a rabbit hole until we discovered that it was indeed a critical Sitecore.Kernel bug that impacted all versions, including the recent 10.0 version.

Since then, the Sitecore product team has provided a fix for this issue, which will be part of the upcoming 10.1 release.  

Until then, you can open a support ticket and reference bug #416301 and request help with your specific Sitecore version if you run into this problem.


How was it fixed?

As previously mentioned, the piece of code responsible for the permission denied error was within the GetAncestorAccess method of the Sitecore.Security.AccessControl.ItemAuthorizationHelper class, which is part of the Sitecore.Kernel.

Within this method, regardless of the value that was returned for the security checks, the key/value combo was stored in the AccessResultCache, which resulted in the permission denied error being thrown the next time the item was requested for a different language.

To correct this problem, an EnforceVersionPresenceDisabler "using statement wrapper" was added within the GetAccess method that is responsible for returning the "access allowed for an operation on an item". See line 26 below.

The switcher disables the Enforce Version Presence functionality; more specifically, it bypasses the functionality that requires the relevant translated language version of the item to be available for it to be returned from the API.

This was the key to correcting the access issue related to the extranet\anonymous user and the enforce version presence logic.

Sitecore PowerShell Extensions - Find Content Items Missing From Sitecore Indexes


Over the course of the last month, we ran into data inconsistencies between our content databases and our Solr indexes.

We have content authors from around the globe and content creation happens around the clock by authors via the Experience Editor and imports via external sources.

Illegal Characters Causing Index Issues

As mentioned in this KB article https://kb.sitecore.net/articles/592127, index documents are submitted to Solr in XML format, and if your content contains “illegal” characters that cannot be converted to XML, all documents in the batch submission will fail.

When you perform an index rebuild or re-index a portion of your tree, Sitecore will submit 100 documents in a batch to Solr. How is this related to the character issue? If you perform an index rebuild and have a single bad character in one of your items in the batch, none of the 100 docs in that batch will make it into your Solr index.

What makes this especially difficult to troubleshoot is that item batches contain different items every time. So, what could be missing from your index during one rebuild could be different during the next rebuild.

There is a good Stack Exchange article that explains all of this, and kudos to Anders Laub who provides a pretty decent fix for this issue: https://sitecore.stackexchange.com/questions/18832/wildly-inconsistent-index-data-after-rebuilds 

PowerShell Index Item Check Script

There are several other reasons why content could be missing from your Sitecore indexes, and so I needed to come up with a way to identify what could be missing.

PowerShell Extensions for the win!

I decided to create a PowerShell script to do just that: check for items in a selected target database that are missing from a selected index, and produce a downloadable report.

What’s nice is that I strapped on an interactive dialog, making it friendly for Authors or DevOps to make their comparison selections.



If you are newish to PowerShell Extensions, this could also help you understand how powerful it truly is, and serve as a guide to build your own scripts that you can use daily!

Sitecore Content Hub - Set up SAML-based SSO in Azure AD using an App Registration


Background

In this post, I will show you how to create and configure an Azure Application Registration in your tenant to allow Sitecore Content Hub users to successfully authenticate against your Azure Active Directory.

Options

The Content Hub team's preferred setup option is to create an Enterprise application within your Azure AD, but unfortunately for us, our DevOps team would not allow this due to very strict security constraints that we had to abide by. This is the main reason we had to go the App Registration route.

We initially tried to get the App Registration working using Microsoft Provider SSO, but could not get the proper Group claims working correctly.

As a result, we focused on configuring SAML Auth within our App Registration, and were able to get all the claims needed to successfully get SSO authentication working with this approach.

Set up within Azure

Within your Azure Portal, find App registrations and click on the New Registration button. Give it a name, and leave the default options selected, and click Register.


Within the newly created registration, go to the Authentication menu option within the Manage section.

Click "Add a platform", and then select "Web".


Set your Redirect URIs to be the Content Hub portal url. You will be able to add additional URIs after the initial set up. For now, I will use a default one.

Make sure you check the Access tokens and ID tokens boxes within the Implicit grant and hybrid flows section.


After this, click the Configure button.

Next, go to the Token configuration menu option within the Manage section.

Click Add group claim, and check the Security group box. Confirm that the Group ID radio option is selected within the ID, Access and SAML options.

Click the Add button.


Next, click Add optional claim.

Within Token type, select SAML, and check the email Claim box. Click Add.


When prompted, check the "Turn on the Microsoft Graph email permission" box to allow the claims to appear in the token. Click Add.


Next, go to the Expose an API menu option within the Manage section. Click Add a scope, and it will generate an Application ID URI for you. 

Make note of this, as you will need it for the Content Hub side.

Click Save and continue.


After it has been created, you can click the Cancel button.




Go to the Overview menu option, and click Endpoints. Find the Federation metadata document XML URL and make note of it.

Then, copy and paste it into a new browser tab.





Make note of the entityID.

Your set of notes should look similar to this:


Set up within Content Hub

Log into your Content Hub portal. Click on Manage, and then go to Settings.



Within Settings, go to PortalConfiguration, and select the Authentication menu option. Change the view to Text as it's easier to work with.

Within the ExternalAuthenticationProviders saml config, set the key values to what you saved in your notes. Make sure you set the provider_name and add some basic messages.



Example:
"ExternalAuthenticationProviders": {
  "global_username_claim_type": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name",
  "global_email_claim_type": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress",
  "google": [],
  "Microsoft": [],
  "saml": [
    {
      "metadata_location": "https://login.microsoftonline.com/8ac76c91-e7f1-41ff-a89c-3553b2da2c17/federationmetadata/2007-06/federationmetadata.xml",
      "sp_entity_id": "api://c8696890-1d5f-479b-9df1-154e8f315165",
      "idp_entity_id": "https://sts.windows.net/8ac76c91-e7f1-41ff-a89c-3553b2da2c17/",
      "password": null,
      "certificate": null,
      "binding": "HttpRedirect",
      "authn_request_protocol_binding": null,
      "is_enabled": true,
      "provider_name": "martinSamlNewLocal",
      "messages": {
        "signIn": "Martin SAML SSO Test"
      },
      "authentication_mode": "Passive"
    }
  ],
  "sitecore": [],
  "ws_federation": [],
  "yandex": []
}

Click Save, and you are done!

You are now ready to test out your authentication using your shiny, new authentication button.

Users with more than 200 groups

We found a limitation with SSO authentication group claims in Azure AD (https://docs.microsoft.com/en-us/azure/active-directory/hybrid/how-to-connect-fed-group-claims) wherein, if there are more than 200 groups associated with a user, SSO authentication will provide a graph link instead of passing in the group claims.

There is currently no solution for this problem. We are handling this handful of users via a manual security setup.


Sitecore Publishing Service - Publishing Job Queue Report Using Sitecore PowerShell Extensions


Background

Sitecore's Publishing Service only allows you to see a maximum of 10 items at a time within the Queued or Recent jobs reports within the dashboard.

This is not ideal if you need to see how many total items are in the queue, need to estimate how long it will take to get your publish live, or quite simply need to do any type of analysis or troubleshooting.

Usually, you will have to talk to a DevOps person who has access to your Sitecore Master database, and get them to write a somewhat complex SQL query against your Publishing_JobQueue table to get you the information that you need.

It is a bit complex due to the fact that most of the key information is stored in an XML field called "Options" within this particular table.
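As a starting point, a query along these lines surfaces the queued jobs with their options payload (the Queued and Options columns are referenced elsewhere in this post; treat the remaining column names as assumptions to verify against your schema):

-- Oldest jobs first, with the Options payload cast to XML for inspection
SELECT TOP (100)
       [Id],
       [Queued],
       CAST([Options] AS xml) AS OptionsXml
FROM [Publishing_JobQueue]
ORDER BY [Queued] ASC;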


PowerShell For The Win

After spending a bit of time formulating a decent SQL query that would get the key information that we were after, I decided to take it one step further by incorporating it into a PowerShell script that could be generated on demand from within the Sitecore console, and also output a searchable and downloadable report.

A clear win for our Authoring Admins and DevOps teams!

I hope you find this script a useful addition to your PowerShell toolbox.



Sitecore Publishing Service - Using Sitecore PowerShell Extensions To Move Publishing Jobs To The Top Of The Queue


Background

In my previous post, I provided a way to get a job queue report using PowerShell Extensions (SPE). In this post, I am going to show how you can use the output from the report to promote publishing jobs to the top of the queue using SPE.

Large Publishing Queue

You may ask, well why? Sitecore's Publishing Service is a great improvement over the out-of-the-box publishing mechanism, and is pretty fast at publishing items.

That is indeed true; however, when working with extremely large sites with several hundred content authors and multiple publishing targets, the queue can become extremely long. I have seen it grow to upwards of several thousand items, with publishing taking several hours.

This is problematic if you have something urgent that needs to be published, as the job could be sitting in the queue for hours!

The Solution

As you saw in my last post, it is pretty simple to access the SQL publishing queue database table using SPE. As the operations of a queue make it a first-in-first-out (FIFO) data structure based on the "Queued" datetime field, I discovered that simply updating the target job's datetime field to a smaller value would instantly move the job higher in the queue.


So, my final logic was this:

  • Get the smallest Queued datetime of the jobs sitting in the queue that still needed to be published
  • Subtract 2 minutes from the value
  • Update the Queued datetime of the job that I want to promote to the top of the queue with this new smaller datetime value (see the SQL sketch after this list)
  • Done! My job was popped to the top!
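In SQL terms, the promotion step is a sketch like this (the Queued and Id columns are referenced above; the job ID value is a hypothetical placeholder that you would take from the job queue report):

-- Promote one job by giving it the smallest Queued value in the table
DECLARE @JobId UNIQUEIDENTIFIER = '00000000-0000-0000-0000-000000000000';

DECLARE @EarliestQueued DATETIME;
SELECT @EarliestQueued = MIN([Queued]) FROM [Publishing_JobQueue];

UPDATE [Publishing_JobQueue]
SET [Queued] = DATEADD(MINUTE, -2, @EarliestQueued)
WHERE [Id] = @JobId;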

Now, this is the perfect pairing with the job queue report from my previous post. You can use the report to find the job and its id that you want to promote, and then use the id to run the script to promote the job!

I recommend following this guide to convert this into your own SPE custom module for your solution: Modules - Sitecore PowerShell Extensions

 




I hope you find this script another useful addition to your PowerShell toolbox.


Sitecore xDB - Troubleshooting xDB Index Rebuilds on Azure

In my previous post, I shared some important tips to help ensure that if you are faced with an xDB index rebuild, you can get it done successfully and as quickly as possible.

I mentioned a lot of things in that post, but now I want to cover the common reasons why things can go wrong, and highlight the most critical items that impact rebuild speed and stability.


Causes of Need To Rebuild xDB Index

Your xDB index relies on your shard databases' SQL Server change tracking feature in order to ensure that it stays in sync. This basically determines how long changes are stored in SQL. As mentioned in Sitecore's docs, the Retention Period setting is set to 5 days for each collection shard.

So, why would 5-day old data not be indexed in time?
  • The Search Indexer is shut down for too long
  • Live indexing is stuck for too long
  • Live indexing falls too far behind

Causes of Indexing Being Stuck or Falling Behind, and Rebuild Failures

High Resource Utilization: Collection Shards 
99% of the time, this is due to high resource utilization on your shard databases. Basically, if you see your shard databases hitting above 80% DTUs, you will run into this problem.

High Resource Utilization: Azure Search or Solr
If you have a lot of data, you need to scale your Azure Search service or Solr instance. Sharding is the answer, and I will touch on this further down.

What to check?

If you are on Azure, make sure your xConnect Search Indexer WebJob is running.
Most importantly, check your xConnect Search Indexer logs for SQL timeouts. 

On Azure, the WebJob logs are found in this location: D:\local\Temp\jobs\continuous\IndexWorker\{randomjobname}\App_data\Logs

Key Ingredients For Rebuild Indexing Speed and Stability

SQL Collection Shards

Database Health 

Maintaining the database indexes and statistics is critically important. As I mentioned in my previous post:  "Optimize, optimize, optimize your shard databases!!!" 

If you are preparing for a rebuild, make sure that you run the AzureSQLMaintenance Stored Procedure on all of your shard databases.

Database Size

The amount of data and the number of collection shards is directly related to resource utilization and rebuild speed and stability. 

Unfortunately, there is no supported way to "reshard" your databases after the fact. We are hoping this will be a feature that is added to a future Sitecore release.

xDB Search Index

Similarly to the collection shards, the amount of data and the number of shards is directly related to resource utilization on both Azure Search and Solr. 

Specifically on Solr, you will see high JVM heap utilization.

If your rebuilds are slowing down or failing, or even if search performance on your xDB index is deteriorating, it's most likely due to the amount of data in your index, the number of shards and distribution amongst nodes that you have set up.  

Search index sharding strategies can be pretty complex, and I might touch on these in a later post.

Reduce Your Indexer Batch Size

Another item that I mentioned in my previous post: if you drop this down from 1000 to 500 and you are still having trouble, reduce it even further.

I have dropped the batch size to 250 on large databases to reduce the chance of timeouts (default is 30 seconds) when the indexer is reading contacts and interactions from the collection shards.

