DogMan
2 years ago
Server 2022 - two servers on the same LAN with 20Gb connectivity and 100GB of staging on each - and we are still seeing event 1315 in the DFS Replication log:

Log Name:      DFS Replication
Source:        DFSR
Date:          12/3/2022 1:17:02 AM
Event ID:      1315
Task Category: None
Level:         Warning
Keywords:      Classic
User:          N/A
Computer:      server 1
Description:
The DFS Replication version vector size has exceeded acceptable limits.  A large version vector size could cause degraded DFS replication performance, poor responsiveness of DFS replication management operations,  and excessive memory and CPU resource consumption. Contact Customer Support Services to analyze the overall health of your DFS Replication deployment. 


 

herbet
2 years ago
You need a seriously large staging quota for that, especially if you have put all of the folders into one replication group. I would say 128 GB might be enough, but it may take 256 GB.
sirclesadmin
a year ago
Try a larger staging quota and make sure you wait for replication to complete fully. The biggest issue we see is people restarting the services because replication hasn't finished after a day or so. Sometimes it can take a while.
drdread
18 days ago

When you see the error "The DFS Replication version vector size has exceeded acceptable limits" (Event ID 1315), it means the version vector (the metadata DFSR keeps to track which updates each member has seen) has grown unusually large, often due to an excessive number of changes, a long-standing backlog, or stale replication metadata.


Causes:



  1. Too Many Files/Changes – Large numbers of changes in a short period can cause replication metadata to grow beyond acceptable limits.

  2. Lingering Backlog – If replication has been interrupted for a long time, the backlog of changes can make the database too large.

  3. Conflicts and Deleted Files – Accumulated deleted or conflicted files increase metadata storage requirements.

  4. Insufficient Database Cleanup – DFSR doesn't automatically shrink the database even after large changes are removed.




Solutions:


Option 1: Check Replication Backlog



  • Run the following command to check for any backlogged files:
    Get-DfsrBacklog -GroupName "ReplicationGroupName" -SourceComputerName "SourceServer" -DestinationComputerName "DestinationServer"

    If there’s a large backlog, it may be necessary to force a resynchronization.
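Backlog is directional, so it is worth checking both ways and counting the results. A minimal sketch, assuming the DFSR PowerShell module is installed; all server and group names are placeholders:

```powershell
# Count backlogged files in each direction (all names are placeholders)
$rg = "ReplicationGroupName"
(Get-DfsrBacklog -GroupName $rg -SourceComputerName "ServerA" -DestinationComputerName "ServerB").Count
(Get-DfsrBacklog -GroupName $rg -SourceComputerName "ServerB" -DestinationComputerName "ServerA").Count
```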




Option 2: Verify Database Size



  • Check the size of the DFSR database folder on the affected servers:
    C:\System Volume Information\DFSR

    (System Volume Information is ACL'd to SYSTEM only, so grant an administrator account access before inspecting it.) If the database is too large, you might need to reset DFSR replication.




Option 3: Perform a Non-Authoritative Restore


If the replication issue is severe and you need to resync data, you can reset the affected member with a non-authoritative restore. On domain-based replication groups the supported method (Microsoft KB 2218556) is to toggle the member's msDFSR-Enabled attribute in Active Directory, rather than deleting the database under System Volume Information or editing the registry by hand:



  1. In ADSI Edit, locate the replicated folder's subscription object (class msDFSR-Subscription) under the computer account:
    CN=DFSR-LocalSettings,CN=<ComputerName>,...

    (The subscription objects are named by GUID, so check you have the right replicated folder.)

  2. Set msDFSR-Enabled to FALSE.

  3. Force AD polling so the change takes effect:
    dfsrdiag pollad

  4. Wait for Event ID 4114 in the DFS Replication log, which confirms replication is disabled for that folder.

  5. Set msDFSR-Enabled back to TRUE and run dfsrdiag pollad again.

  6. Wait for Event ID 4614 followed by 4604, which confirm the non-authoritative initial sync has completed. Divergent local files are preserved in the ConflictAndDeleted and PreExisting folders rather than lost.

    This reinitializes the member's DFSR database and fetches its data from the other servers.
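On domain-based replication groups, the supported non-authoritative reset (Microsoft KB 2218556) toggles the msDFSR-Enabled attribute in Active Directory instead of touching the database or registry directly. A sketch assuming the RSAT ActiveDirectory module, run on the member being resynced:

```powershell
# Find this member's DFSR subscription objects in AD
Import-Module ActiveDirectory
$computerDN = (Get-ADComputer $env:COMPUTERNAME).DistinguishedName
$subs = Get-ADObject -SearchBase "CN=DFSR-LocalSettings,$computerDN" `
    -Filter 'objectClass -eq "msDFSR-Subscription"' -Properties msDFSR-Enabled

# Disable replication on this member, then poll AD; wait for event 4114
$subs | Set-ADObject -Replace @{ 'msDFSR-Enabled' = $false }
dfsrdiag pollad

# Once event 4114 is logged, re-enable and poll again;
# events 4614 then 4604 confirm the non-authoritative initial sync
$subs | Set-ADObject -Replace @{ 'msDFSR-Enabled' = $true }
dfsrdiag pollad
```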




Option 4: Check and Increase the Staging Quota


If the issue involves large or frequent data changes, an undersized staging quota throttles replication, so increasing it usually helps (as the earlier replies note):



  1. Open the DFS Management console (dfsmgmt.msc).

  2. Go to Replication > select the affected replication group.

  3. On the Memberships tab, right-click the problematic folder > Properties > Staging.

  4. Raise the staging quota (the default is 4 GB; the guideline is at least the combined size of the 32 largest files in the replicated folder).
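Microsoft's sizing guideline for the staging quota is at least the combined size of the 32 largest files in the replicated folder. A sketch to compute and apply that figure; the path and the group/folder/server names are placeholders:

```powershell
# Combined size of the 32 largest files, rounded up to whole MB (path is a placeholder)
$top32 = Get-ChildItem -Path 'D:\ReplicatedFolder' -Recurse -File |
    Sort-Object Length -Descending | Select-Object -First 32
$quotaMB = [math]::Ceiling(($top32 | Measure-Object Length -Sum).Sum / 1MB)

# Apply it to the membership
Set-DfsrMembership -GroupName "ReplicationGroupName" -FolderName "FolderName" `
    -ComputerName "ServerName" -StagingPathQuotaInMB $quotaMB
```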




Option 5: Check and Manage Conflict and Deleted Files


Conflict and deleted files take up metadata space. They live in the hidden ConflictAndDeleted folder under each replicated folder's DfsrPrivate directory, and DFSR tracks them in a manifest, so avoid deleting files there by hand; instead, let DFSR prune the folder by tuning its quota.


To check the current Conflict and Deleted quota, run:


dfsradmin membership list /rgname:"ReplicationGroupName" /attr:conflictanddeletedquota

To increase or reduce the quota:


dfsradmin membership set /rgname:"ReplicationGroupName" /rfname:"FolderName" /memname:"ServerName" /conflictanddeletedquota:1024

(Replace 1024 with the desired quota in MB.)
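On Server 2012 R2 and later the same quota can also be read and set with the DFSR PowerShell module; a sketch with placeholder names and paths:

```powershell
# Read the current quotas for a member (names are placeholders)
Get-DfsrMembership -GroupName "ReplicationGroupName" -ComputerName "ServerName" |
    Select-Object FolderName, ConflictAndDeletedQuotaInMB, StagingPathQuotaInMB

# Raise the ConflictAndDeleted quota to 4 GB
Set-DfsrMembership -GroupName "ReplicationGroupName" -FolderName "FolderName" `
    -ComputerName "ServerName" -ConflictAndDeletedQuotaInMB 4096

# List what DFSR is currently preserving (path is a placeholder)
Get-DfsrPreservedFiles -Path "D:\ReplicatedFolder\DfsrPrivate\ConflictAndDeletedManifest.xml"
```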




Prevention Tips:


  • Regularly monitor DFS Replication health with Get-DfsrState.

  • Keep an eye on the backlog with Get-DfsrBacklog.

  • Use a dedicated replication server instead of running DFSR on overburdened servers.

  • Avoid frequent bulk changes to replicated files.
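The monitoring commands above can be combined into a quick health snapshot; a sketch with placeholder names, assuming the Server 2012 R2+ DFSR cmdlets:

```powershell
# Group this member's update records by state (Downloading, Scheduled, etc.)
Get-DfsrState -ComputerName $env:COMPUTERNAME |
    Group-Object UpdateState | Select-Object Name, Count

# Backlog count between two members (names are placeholders)
(Get-DfsrBacklog -GroupName "ReplicationGroupName" `
    -SourceComputerName "ServerA" -DestinationComputerName "ServerB").Count
```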




Let me know if you need help with specific steps! 🚀