When things get out of control in a parallel universe, people notice and intervene. And so it turned out here. One comment from Chad Rexin was so good that it deserved its own post. Welcome to the twisted SQL world where, like in Lobachevskian geometry, there is more than one degree of parallelism.
Another way to get good performance is to leave Max Degree of Parallelism at 0, or cap it at 4 if you have 4 or more CPUs/cores, while simultaneously raising 'cost threshold for parallelism' from its default of 5 to 60. The threshold is compared against the optimizer's estimated cost for a query (a unit that nominally corresponds to estimated seconds of execution on a reference machine); only when the estimate exceeds the threshold will SQL Server generate a parallel plan. This way all but the more expensive queries run on a single CPU, and SQL Server only produces a multi-CPU plan when it estimates the cost above 60. That gives you the best of both worlds when Reporting databases or Analysis Services databases happen to live on the same SQL Server as the CRM databases, so both can run optimally. Even on servers with many more than 4 cores, you may not see much advantage, and may even see a drop in performance, from setting max degree of parallelism higher than 4; that is what I have seen in very busy Microsoft CRM environments with 1,000 or more active users and/or active integrations.
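Before changing anything, it helps to see what the instance is currently running with. Here is a minimal sketch (assuming SQL Server 2005 or later) that reads both settings from sys.configurations; value is the configured value, and value_in_use is what the engine is actually using:

SELECT name, value, value_in_use, description
FROM sys.configurations
WHERE name IN ('max degree of parallelism', 'cost threshold for parallelism');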
Ultimately, you may need to run a consistent workload and test to determine which settings are optimal. Below are the T-SQL statements to make the change.
sp_configure 'show advanced options', 1;  -- both settings below are advanced options
RECONFIGURE WITH OVERRIDE;
GO
sp_configure 'max degree of parallelism', 4;
RECONFIGURE WITH OVERRIDE;
GO
sp_configure 'cost threshold for parallelism', 60;
RECONFIGURE WITH OVERRIDE;
GO
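To judge whether a threshold of 60 actually suits your workload, one way to test is to check how many cached plans still go parallel after the change. The query below is a rough sketch rather than a definitive diagnostic: it assumes SQL Server 2005 or later and VIEW SERVER STATE permission, and it flags plans that contain a Parallelism (exchange) operator:

WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT cp.usecounts, cp.objtype, st.text AS query_text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE qp.query_plan.exist('//RelOp[@PhysicalOp="Parallelism"]') = 1;  -- parallel plans only

If expensive reporting queries still show up here, the threshold is doing its job; if trivial CRM queries do, consider raising it further and re-testing under the same workload.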