SharePoint 2016 – Configure Outgoing email settings using Gmail

To configure SharePoint 2016 outgoing email settings using Gmail, follow these steps:

Step 1: Install “SMTP Server” feature from Windows Server Manager


Note: Make sure to Include management tools
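The same feature can also be installed from an elevated PowerShell prompt instead of Server Manager; a minimal sketch:

```powershell
# Install the SMTP Server feature together with its management tools
# (equivalent to checking "Include management tools" in Server Manager).
Import-Module ServerManager
Install-WindowsFeature -Name SMTP-Server -IncludeManagementTools
```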

Step 2: Open IIS 6.0 Manager


Step 3: In the General tab, make sure TCP port 25 is used


Step 4: In the Access tab, select Anonymous access


Step 5: In the Access tab, add the local server IP address


Step 6: In the Messages tab, you can optionally disable “Limit number of messages per connection to”


Step 7: In the Delivery tab, add the Gmail account credentials (Outbound Security)


Step 8: In the Delivery tab, set the Gmail outbound port 587


Step 9: In the Delivery tab, add the Gmail SMTP server (smtp.gmail.com) as the smart host


Step 10: In the General tab, copy the fully-qualified domain name


Note: Restart the Simple Mail Transfer Protocol (SMTP) service from services.msc
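The restart can also be done from PowerShell (SMTPSVC is the service name of the Simple Mail Transfer Protocol service):

```powershell
# Restart the SMTP service so the new relay settings take effect.
Restart-Service -Name SMTPSVC
```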

Step 11: Open SharePoint Central Administration and add the fully-qualified domain name in the Outbound SMTP server field, as shown in the image below
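The outgoing email settings can also be applied with PowerShell instead of Central Administration; a minimal sketch, where the FQDN and the sender address are placeholders you should replace with your own values:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell

# The Central Administration web application holds the farm-wide
# outgoing mail settings.
$ca = Get-SPWebApplication -IncludeCentralAdministration |
    Where-Object { $_.IsAdministrationWebApplication }

# UpdateMailSettings(SMTP server, from address, reply-to address, codepage)
# 65001 = UTF-8. "WFE01.contoso.local" and the addresses are placeholders.
$ca.UpdateMailSettings("WFE01.contoso.local", "no-reply@contoso.local",
    "no-reply@contoso.local", 65001)
```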



SharePoint – Unexpected error occurred in method GetObject, usage SPViewStateCache

If a SharePoint web part depends heavily on ViewState to store large data per request, you may find that data is sometimes not saved, and checking the SharePoint logs reveals the following error:

Unexpected Unexpected error occurred in method ‘GetObject‘ , usage ‘SPViewStateCache‘ – Exception ‘Microsoft.ApplicationServer.Caching.DataCacheException:


It appears that SharePoint tries to cache the request using the Distributed Cache service, and the cache client has a tight default RequestTimeout (20 milliseconds).

To fix the above issue, increase the RequestTimeout using the following PowerShell script:

# Read the current client settings for the distributed ViewState cache.
$ViewStateCache = Get-SPDistributedCacheClientSetting -ContainerType DistributedViewStateCache
# Raise the request timeout (the value is in milliseconds).
$ViewStateCache.RequestTimeout = 1000
# Write the updated settings back to the farm.
Set-SPDistributedCacheClientSetting -ContainerType DistributedViewStateCache -DistributedCacheClientSettings $ViewStateCache


DynamoDB databases on the Amazon Web Services (AWS) environment

These papers cover a set of topics related to DynamoDB databases on the Amazon Web Services (AWS) environment, helping you understand this service and learn what can be done with it and how to implement it in the best way.

These papers were written in collaboration with my friend Ismail Anjrini




General SharePoint Considerations in Disaster Recovery solutions

These are important points to consider with remote SharePoint disaster recovery solutions:

  • When adding a new site collection to a content database that is replicated to the disaster recovery farm, make sure to update the configuration database in the disaster recovery farm so that the newly created site collection is registered there, because it will not be updated automatically. You can do this with the following PowerShell (run on the DR farm; replace "DatabaseName" with the name of the replicated content database):
    $db = Get-SPDatabase | Where-Object {$_.Name -eq "DatabaseName"}
    $db.RefreshSitesInConfigurationDatabase()
  • In the case of a multi-subnet failover cluster, consider using the MultiSubnetFailover property for all SharePoint databases to make connections to a different subnet more stable and to avoid connection timeout issues.
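In SharePoint 2016 this can be enabled per database from PowerShell; a sketch, assuming the MultiSubnetFailoverEnabled property available since SharePoint Server 2016:

```powershell
Add-PSSnapin Microsoft.SharePoint.PowerShell

# Enable MultiSubnetFailover on every SharePoint database so the SQL client
# tries the replica IP addresses in parallel instead of timing out.
Get-SPDatabase | ForEach-Object {
    $_.MultiSubnetFailoverEnabled = $true
    $_.Update()
}
```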

Cluster Quorum Models

The overview below describes the cluster quorum models and gives SharePoint administrators insight into which option is appropriate.

A cluster contains nodes and resources, and those resources stay highly available only while enough nodes are up and running. The cluster requires more than half of the votes to be online; otherwise the cluster goes down. The quorum maintains the number of voters (nodes, and optionally a disk witness or file share witness) that must be online for the cluster to keep running, and it prevents “split-brain” scenarios in which nodes that cannot communicate with each other each try to own the resources at the same time. By default, every node in a failover cluster has a vote that determines whether the cluster continues running.

The value ‘0’ means the node doesn’t have a vote. The value ‘1’ means the node has a vote.

Each node in a WSFC cluster participates in periodic heartbeat communication to share the node’s health status with the other nodes.
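The per-node vote can be inspected with the FailoverClusters PowerShell module; a sketch, to be run on a cluster node:

```powershell
Import-Module FailoverClusters

# NodeWeight: 1 = the node has a vote, 0 = it does not.
Get-ClusterNode | Format-Table -Property Name, State, NodeWeight
```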

Cluster Quorum has four models, only first three are recommended to use:

  • Node Majority: only Nodes can vote
  • Node and File Share Majority: Nodes and File Share witness can vote
  • Node and Disk Majority: Nodes and Disk witness can vote
  • No Majority: Disk Only: only the disk witness can vote. This model was used prior to Windows Server 2003, which supported only a disk witness quorum.

To understand the usage of these models, let us consider the following examples:

  1. If you have 4 nodes (4 votes) and 1 node fails, the 3 remaining nodes are more than half of the cluster votes, so the cluster stays running.
  2. If you have 4 nodes (4 votes) and 2 nodes fail, the cluster goes down, because the 2 remaining nodes are not more than half of the cluster votes.
  3. To increase the availability of a 4-node cluster, we can add a file share or disk witness that also has a vote. With 4 nodes + 1 witness (5 votes), the cluster stays running if 1 node fails (4 votes remain) and even if 2 nodes fail (3 votes remain). Adding a file share or disk witness therefore increases availability cheaply, without the need to purchase another server (node).
  4. If you have 2 nodes and no file share or disk witness, then if one node goes down the cluster goes down.
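The majority rule in the examples above can be sketched as a small helper (a hypothetical function, not part of any Microsoft module):

```powershell
# Returns $true when the surviving voters still form a strict majority
# of the total configured votes (nodes plus optional witness).
function Test-QuorumSurvives {
    param(
        [int]$TotalVotes,   # nodes + witness votes configured in the cluster
        [int]$FailedVotes   # votes lost to failed nodes/witness
    )
    $online = $TotalVotes - $FailedVotes
    return $online -gt ($TotalVotes / 2)
}

Test-QuorumSurvives -TotalVotes 4 -FailedVotes 1   # True  (example 1)
Test-QuorumSurvives -TotalVotes 4 -FailedVotes 2   # False (example 2)
Test-QuorumSurvives -TotalVotes 5 -FailedVotes 2   # True  (example 3)
```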

It’s recommended to have an odd number of votes so that a strict majority is always well defined (minimum: 2 nodes + a file share or disk witness).

Which model to select?

By default, Failover Cluster Manager picks the best model based on the cluster configuration and nodes. If there is an odd number of nodes, the cluster selects Node Majority; if there is an even number of nodes and a file share witness, it selects Node and File Share Majority; if there is a disk witness instead, it selects Node and Disk Majority. If there is an even number of nodes and no disk or file share witness, the mode will be Node Majority, with warning messages.
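The current model can be checked, and a different one configured, with the FailoverClusters cmdlets; a sketch (the file share path is a placeholder):

```powershell
Import-Module FailoverClusters

# Show the quorum model the cluster is currently using.
Get-ClusterQuorum

# Switch to Node and File Share Majority using a witness share
# (\\FS01\Witness is a placeholder path).
Set-ClusterQuorum -NodeAndFileShareMajority "\\FS01\Witness"
```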

If the cluster uses No Majority: Disk Only, the cluster effectively has only 1 vote, and if the disk goes down, the whole cluster goes down.