Channel: File Services and Storage forum

robocopy command - need to keep the original folder / file dates from the servers.


I am in a process of using robocopy to transfer data between servers.  Destination is Server 2012.  Here is my command

robocopy "d:\Kiosk" "\\POPCORN\Kiosk" /e /zb /Copyall /mir /secfix /sec /log+:C:\copy.log

It seems that the modified date is copied OK, but once I click on a folder it changes to the date it arrived on the new server.
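
If it is the directory timestamps that are being reset, note that /copyall only governs what is copied for files; directory timestamps are controlled by the separate /dcopy flag, which by default does not preserve times. A hedged tweak of the command above, adding /dcopy:T so folder dates survive the copy:

  robocopy "d:\Kiosk" "\\POPCORN\Kiosk" /e /zb /copyall /dcopy:T /mir /secfix /sec /log+:C:\copy.log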

Any help would be greatly appreciated.

Thanks.

Just after the files are copied the 


patyk


DFS with high availability using DFS Replication


Dear All,

I have  a DFS file server with following setup

Two Windows Server 2012 R2 servers run DFS. An AD-based DFS namespace is created, all the shared folders are created on server 1, and server 2 is added as a replica; all the files replicate perfectly.

If server 1 fails, will all the users get the shared folders from server 2 automatically?

How can we achieve high availability with DFS?
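
For a domain-based namespace, failover should be automatic provided every namespace folder has both servers enabled as folder targets: when server 1 stops responding, clients retry against the referral list and land on server 2 (after any cached referral expires). A hedged way to confirm each folder really has two targets, using the DFSN PowerShell module on 2012 R2 (the namespace path below is a placeholder):

  # Two online targets per folder means clients have somewhere to fail over to
  Get-DfsnFolderTarget -Path "\\contoso.local\Files\Shared" |
      Select-Object TargetPath, State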

Disk Management showing multiple drives with the same drive letter


First I shrank the F: volume.

Then, when I tried to merge the freed space back into F:, Disk Management showed 3 drives with the same name and drive letter.

But when I added the space to the C: drive, it worked fine!

I was trying to merge two drives in order to get one empty drive on which to install Ubuntu.

Now even shrinking is not working; only the 'loading' icon keeps rotating, and nothing happens.

Lack of knowledge resulted in this. Please tell me what I can do now.

Local Storage Spaces on Hyper-V cluster nodes


Hello,

We have a Hyper-V cluster with several nodes and SMB storage behind it.

We need to attach local SSD storage to the Hyper-V node servers.

If a node's local storage bus is configured as RAID, I can create a local Storage Spaces pool without problems.

If a node's local storage bus is configured as SAS, the Storage Spaces subsystem becomes "Clustered Storage Spaces" and I have problems: I can't create a local Storage Spaces pool.

How do I prevent the Storage Spaces subsystem from becoming "Clustered Storage Spaces"?
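
One approach that may help (a sketch under assumptions, not a verified fix): create the pool explicitly against the node's local, non-clustered storage subsystem rather than letting the cluster claim the disks. The friendly-name patterns below match the Server 2012 R2 defaults and may differ on your systems:

  # Find the local (non-clustered) Storage Spaces subsystem on this node
  $ss = Get-StorageSubSystem -FriendlyName "Storage Spaces*" |
      Where-Object { $_.FriendlyName -notlike "Clustered*" }

  # Build the pool from poolable local SSDs against that subsystem
  New-StoragePool -FriendlyName "LocalSSDPool" `
      -StorageSubSystemUniqueId $ss.UniqueId `
      -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

Note that Get-PhysicalDisk -CanPool $true returns every poolable disk, so filter it further if some of those disks are meant for the cluster.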

DFS Replication issues


HI There,

We have 10 file servers in our environment with DFS-R enabled. As of now, the replicated data is placed on the D: drive.

We are moving the replicated data from the D: drive to the E: drive due to lack of space on D: on a few servers. Below is the approach we followed in order to avoid re-replicating the entire dataset from the central DFS-R server:

1. Stop DFS-R service on target server

2. Copy the data from D drive to E drive

3. Adjust target replication folders to E drive in DFS management

4. Start the DFS-R service again

However, after changing the drive, I see that DFS-R starts replicating the entire replicated folder from scratch, regardless of the fact that the data already exists.

Is there a way to avoid this?
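
For what it's worth, preseeding like this only avoids re-replication if the copied files hash identically to the originals, which means security descriptors, attributes, and timestamps must all survive the copy in step 2. A hedged sketch of steps 1-2 (the folder and file names are placeholders; dfsrdiag filehash can spot-check that both copies of a file produce the same hash before the service is started again):

  # 1. Stop DFS-R so the staging area and database stay consistent
  Stop-Service DFSR

  # 2. Copy with security, attributes, and timestamps intact,
  #    skipping the DfsrPrivate metadata folder
  robocopy D:\ReplData E:\ReplData /e /b /copyall /dcopy:T /xd DfsrPrivate

  # Spot-check a file on both drives; the hashes must match
  dfsrdiag filehash /path:D:\ReplData\example.doc
  dfsrdiag filehash /path:E:\ReplData\example.doc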



Mahi

Network Shares on File Server Windows Server 2012 Datacenter respond with high latency/delay


Hi All.

In the past months, on several occasions, network shares on a Windows Server 2012 Datacenter file server have become highly unresponsive: it takes up to 20-25 seconds just to open a DFS share. A restart solves the issue. It happened twice last week.

The server is a virtual machine (VMware Tools ver. 9.4.11.2400950) and a member of the domain (2003 schema).

Please let me know if I may provide more useful information.

Regards,

DH

Lost file server access after Group Policy failures and write-cache-enabled events


Hello,

For the past week, access to my file services has not been possible, and it seems that Group Policy is failing on the Windows 2008 server. I had a look at Dell Server Administrator, but there are no hardware problems.

At different times, no users can access the file services. When I go onto the server I see some events regarding "Group Policy Failed"; after rebooting the server manually, everything is OK and all the users can go back to working in the various shared folders.

1) I noticed in the "Administrative Events" view that Group Policy failed:

    "The processing of Group Policy failed. Windows attempted to read the file \\truberries.local\sysvol\truberries.local\Policies\{31B2F340-016D-11D2-945F-00C04FB984F9}\gpt.ini from a domain controller and was not successful. Group Policy settings may not be applied until this event is resolved. This issue may be transient and could be caused by one or more of the following:
    a) Name Resolution/Network Connectivity to the current domain controller.
    b) File Replication Service Latency (a file created on another domain controller has not replicated to the current domain controller).
    c) The Distributed File System (DFS) client has been disabled."

2) After the Group Policy failure, "Administrative Events" also shows that data corruption may occur:

  • "The driver detected that the device \Device\Harddisk0\DR0 has its write cache enabled. Data corruption may occur."

  • "Volume Shadow Copy Service error: Unexpected error calling routine RegOpenKeyExW(-2147483646,SYSTEM\CurrentControlSet\Services\VSS\Diag,...). hr = 0x80070005, Access is denied.
    Operation: Initializing Writer
    Context:
      Writer Class Id: {e8132975-6f93-4464-a53e-1050253ae220}
      Writer Name: System Writer
      Writer Instance ID: {d272cca4-3c59-4411-8d5c-1b36555903b8}"

  • "Active Directory Domain Services could not disable the software-based disk write cache on the following hard disk.
    Hard disk: c:
    Data might be lost during system failures."

The RAID controller is software-based; there are 2 disks in RAID 1.

Please let me know if you have heard about this issue. Thank you.

:-)

                    

Georgios

Moved User folders to new file server and permissions didn't come with

We recently switched to a new file server and everything was fine at first. Once users started requesting access to another person's User folder, they would get access to anything new created since the changeover, but would get an "access denied" error on any folder or file from before the move. I've been experimenting with the takeown command and icacls, but could really use some expert advice. Thanks in advance!
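
For reference, a common repair for this pattern (ownership and ACLs that didn't survive the move) is to take ownership as an administrator and then reset the ACLs so inheritance flows down from the parent again. A cautious sketch with takeown and icacls; the path is a placeholder, and it's worth trying on a single test folder first:

  # Take ownership of the moved tree (recursive, answer Yes to prompts)
  takeown /f "E:\Users\SomeUser" /r /d y

  # Reset ACLs so permissions inherit from the parent folder again
  icacls "E:\Users\SomeUser" /reset /t /c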

Why should we create a whole VHD when we have already mirrored some hard disks with RAID?

Why should we create a whole VHD when we have already mirrored some hard disks with RAID?

2012R2 Local Deduplication of VMs?


According to the MS blog Deploying Data Deduplication for VDI storage in Windows Server 2012 R2, a separate file server must be used to deduplicate VDI virtual hard drives.  Here is the relevant quote:

"First and foremost, to deploy Data Deduplication with VDI, the storage and compute responsibilities must be provided by separate machines."

Then there are these two quotes from the MS blog Extending Data Deduplication to new workloads in Windows Server 2012 R2:

"We also realized that all of this would take up resources on the server running Data Deduplication. If we were to run this on the same server as the VMs, then we’d be competing with them for resources. Especially memory. So we quickly came to the conclusion that we needed to separate out storage and computation nodes when Data Deduplication was involved with virtualization."

"we do not support deduplication of arbitrary in use VHDs in Windows Server 2012 R2. However, since Data Deduplication is a core part of the storage stack, there is no explicit block in place that prevents it from being enabled on arbitrary workloads."

I'm trying to determine what is best practice versus what is actually enforced by the OS. Based on the last quoted paragraph, and the fact that there is no restriction preventing a Hyper-V server from also being a file server, I suspect it would be possible to have the RDS-VH VDI (Hyper-V) host deduplicate its own guests' virtual hard drives, so long as they were in a shared folder on a non-system drive (in spite of the fact that said folder is technically local). Operating under the assumption that the server in question has plenty of processing power and memory to spare, is there anything in the OS to prevent such a configuration?
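
As far as I can tell, nothing blocks it: on 2012 R2 you can enable deduplication on the data volume with the Hyper-V usage type regardless of whether the same machine runs the VMs, although the quoted guidance still makes this unsupported for in-use VDI VHDs. A sketch (E: stands in for the non-system volume holding the virtual hard drives):

  # Requires the feature: Install-WindowsFeature FS-Data-Deduplication
  Enable-DedupVolume -Volume "E:" -UsageType HyperV

  # Check savings once an optimization job has run
  Get-DedupStatus -Volume "E:"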

DFS Member servers lost, need to restore


Due to severe data corruption, we lost some Windows servers. We are working to rebuild and restore, but have some questions about the process.

We are (were?) running Windows Server 2008 R2, using DFS-R to replicate data to and from another location. Fortunately the data still exists, but now we need to rebuild the destroyed member server. I am concerned that simply rebuilding the server with the same identity and adding it back to the replication group will result in the loss of data on the other member server. Another fear is that restoration of data from tape will result in any newly created/updated data being overwritten by the older data from tape.

Can anyone outline the complete steps needed to rebuild a DFS-R member server, and bring it back with minimal downtime and zero data loss? In all my searching, I have not found any documentation on this process. I've been in IT a long time, but have minimal experience with Windows servers; we just migrated from Novell a couple years ago.

Thanks in advance,

Shawn

DFS replication issue

  1. We have observed that there is a difference in free space between server A, server B, and server C for J:\Data.
  2. The oldest file in the replicated folder on server A is dated 23/1/2015, but the same folder on server C holds files going back to 12/10/2013.

How is this possible? Replication appears to be OK, and all 3 servers are added as replication partners of each other.

Recently created folders and files are replicating between all members without any issues.

Please help !!!
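
If it helps the investigation, the per-member backlog can be checked directly; a large, stuck backlog would explain members holding different file sets. A sketch using dfsrdiag (the replication group, folder, and member names are placeholders):

  # Count updates queued from server A to server C for this replicated folder
  dfsrdiag backlog /rgname:"Data RG" /rfname:"Data" /smem:SERVERA /rmem:SERVERC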

Some network shares not accessible, but mapping them as a network drive works fine


Hello All,

Recently we started receiving an error when accessing network shares on our LAN. It is a strange error: "The specified path does not exist" / "We can't find <address>, please make sure you're using the correct location or web address". I checked the article below regarding this issue, but it didn't solve my problem.

http://www.megaleecher.net/Fix_Windows_Cannot_Access_Error

The strange thing is that we don't face this issue with all the shares on a given server, only some of them; it is very random, with no common thread. Another interesting observation is that I am able to open a share as a mapped network drive, but not directly from a hyperlink.

Your inputs are valuable on this

Thanks

Uwaiz

Robocopy commands - Find which DFS folder on DFS servers holds the most updated copy

Hi,

I was hoping someone could help me on this issue.

I'm trying to consolidate one of my existing DFS replicated folders into a single folder holding the latest copy, and then STOP replication for that folder. At the moment, DFS reports are showing replication errors, and I've realised I don't need this folder to replicate any more, as it has become obsolete (not needed moving forward).

Before I stop replication, does anyone know a Robocopy command to find out more about the member folder across the 4 DFS servers, and in particular which of those servers holds the most recently written/updated copies of the subfolders and files? I just want to make sure that when I consolidate into one folder, I end up with the most recent copy of every file.
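
One hedged way to do this with robocopy alone is a list-only comparison between each pair of members: /l makes no changes at all, and /xo excludes files where the source copy is older, so the log shows exactly which files the first server holds newer versions of. The server and share names below are placeholders:

  # List (without copying) the files on SERVER1 that are newer than SERVER2's copies
  robocopy \\SERVER1\Share\Folder \\SERVER2\Share\Folder /e /l /xo /njh /njs /log:compare-1-vs-2.txt

Repeating this for each pair of the 4 members identifies, per file, which member has the latest write.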

Thanks.

Server 2012 R2 ODX and max file fragmentations


I am reviewing Microsoft's ODX documentation.  It states the following:

The Windows host also limits the number of file fragments to 64. If the offload read request consists of more than 64 fragments, Windows fails the copy offload request and falls back to the traditional copy operation.

I have two volumes (S: and T:) on an iSCSI SAN. I have a test file, 960MB.bin, on my S: drive. My disk editor shows that this file is comprised of 180 fragments (screenshot omitted).

I then copy the file from the S:\ volume to the T:\ volume.  During the copy operation, I monitor the traffic going to the iSCSI volumes.  I see 180 back-to-back ODX transfers comprised of the following sequence:

  • Source Volume:  CDB - Populate Token
  • Source Volume:  CDB - Receive ROD Token Information
  • Destination Volume:  CDB - Write Using Token
  • This repeats a total of 180 times until the file copy is complete

So when Microsoft's documentation states that "The Windows host also limits the number of file fragments to 64", what is it referring to? If it's referring to file fragmentation as I show above, why did ODX engage for me? Is there a difference between Server 2012 and 2012 R2? Is there a registry key to adjust this setting?


Storage Tiers Optimization Fails


I have a two node Hyper-V cluster configured with Storage Spaces on Windows Server 2012 R2. I have 3 CSVs. The third CSV (CSV3) has SSDs and is configured for storage tiers. In my initial testing, I put one VM on that storage and ran the Storage Tiers Optimization Report. It ran fine and showed all data served from the SSD tier (which was expected, it was a small server).

I've since added several more servers to the SSD tier. I attempted to run the optimization report again so I could see the usage, but it fails with the following error:

The operation requested is not supported by the hardware backing the volume. (0x8900002A).


Nothing has changed on the server; I haven't installed any updates or other software. Has anyone encountered this error? I can't find any information about it. The only thing I can think of is that the server that owns the storage isn't activated (we're waiting on Finance to approve the quote...). Could that be the issue?
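
In case it helps isolate the failing layer: the report is produced by the same machinery as the scheduled tiering job, so running the tier optimizer by hand on the node that owns the CSV may yield a more specific error. A sketch, assuming the volume is mounted at C:\ClusterStorage\Volume3 on the owner node:

  # /g optimizes storage tiers on a tiered volume; /h runs at normal priority
  defrag C:\ClusterStorage\Volume3 /g /h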

DFS Error 5002 4102 & 4004


Hi all, how are you today? Here is my issue: I'm attempting to set up DFS on our network, and it's not working quite how I expected it to. I have a domain controller at a branch office running Server 2012 that connects to our domain controller here at the main office running Server 2008; the DC at the branch office also functions as the file server, considering it has more than enough storage space. At the main office we have 1 DC plus a file server running Server 2008. I set up DFS on our file server, configured a namespace and replication group for the folder I want to replicate between the two servers, and then I get these errors:

4004

The DFS Replication service stopped replication on the replicated folder at local path C:\Storage$\User Files$. 
 
Additional Information: 
Error: 9098 (A tombstoned content set deletion has been scheduled) 
Additional context of the error:   
Replicated Folder Name: User Files$ 
Replicated Folder ID: 6370E6E5-602C-4F40-96AC-6C52DA802B4E 
Replication Group Name: domain.local\storage$\user files$ 
Replication Group ID: 76E6FA4D-1636-479E-A38D-79BC6B93065D 
Member ID: B94E20A8-355F-451A-9169-F4F42D8CEAE4

4102

The DFS Replication service initialized the replicated folder at local path C:\Storage$\User Files$ and is waiting to perform initial replication. The replicated folder will remain in this state until it has received replicated data, directly or indirectly, from the designated primary member. 
 
Additional Information: 
Replicated Folder Name: User Files$ 
Replicated Folder ID: 6370E6E5-602C-4F40-96AC-6C52DA802B4E 
Replication Group Name: domain.local\storage$\user files$ 
Replication Group ID: 76E6FA4D-1636-479E-A38D-79BC6B93065D 
Member ID: B94E20A8-355F-451A-9169-F4F42D8CEAE4

5002

The DFS Replication service encountered an error communicating with partner NETWORK-STORAGE for replication group domain.local\storage$\user files$. 
 
Partner DNS address: NETWORK-STORAGE.DOMAIN.LOCAL 
 
Optional data if available: 
Partner WINS Address: NETWORK-STORAGE 
Partner IP Address: 10.141.70.11 
 
The service will retry the connection periodically. 
 
Additional Information: 
Error: 5 (Access is denied.) 
Connection ID: 94BB212E-D6ED-45E0-B0A9-1F442F7CAC28 
Replication Group ID: 76E6FA4D-1636-479E-A38D-79BC6B93065D

in that order on the branch office DC. Now, if I set up a new share on our main office DC with the same name and the same permissions as the folder I'm trying to replicate between the main office storage server and the branch office DC, and then throw a few text documents and files in there, it works flawlessly; I even put a 50 GB folder into it to see if it would continue working, and it did. So I don't know what is wrong, but what I find particularly interesting in the 5002 error is the "Error: 5 (Access is denied.)" line under Additional Information. I have seen people all over TechNet with that same error and it has never been solved, so this time we have to solve it; any and all help would be appreciated.



Low disk space on C: drive while only 41 GB is consumed


Hello, 

We are using Windows Server 2008 R2 Enterprise, with SQL Server 2008 R2 and MS Dynamics CRM 2011 installed on the machine. The total disk space is 1 terabyte, with a 991 GB C: partition.

Yesterday morning, when we tried to open SQL Server, we found that we could not perform any operation because the C: drive had 0% free space and was completely full. When I checked the actual size of the files on C:, it was only 41 GB. We tried several remedies, but no luck.
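
A common cause of this kind of invisible usage is space that a normal file listing doesn't show, such as Volume Shadow Copy storage. One hedged first check, from an elevated prompt:

  # Show how much space shadow copies are allowed to use, and are using, on C:
  vssadmin list shadowstorage /for=C: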

Urgent help is needed on this issue, as project operations are being affected.

Kindly advise. 

Best, 

Fahad


Fahad Ali Shaikh

Unable to browse a remote share


On a Windows 2003 server, I have a dedicated hidden share where I store all users' home directory folders.

Normally I can browse to the share remotely via its UNC path.

However, some folders remain hidden unless I browse directly from the file server itself; from the server, the same UNC path works as well.

I did notice that the icons look different: the folders I cannot see remotely are yellow but with a sheet of paper at the top.


Disk punch via powershell and PSExec


All,

I have been struggling on getting some virtual machines to disk punch and getting the right syntax to work.

The PowerShell script below was posted on "whats up duck". Getting it to run via a scheduled task posed a number of issues: making a scheduled task appear on a machine (deployed via Group Policy) required a reboot, and I couldn't get it to display or run any other way. The other alternative, a script that creates a scheduled task and then removes it afterwards, was just a pain.

Content of Write-ZeroesToFreeSpace.ps1:

<#
 .SYNOPSIS
  Writes a large file full of zeroes to a volume in order to allow a storage
  appliance to reclaim unused space.

 .DESCRIPTION
  Creates a file called ThinSAN.tmp on the specified volume that fills the
  volume up to leave only the percent free value (default is 5%) with zeroes.
  This allows a storage appliance that is thin provisioned to mark that drive
  space as unused and reclaim the space on the physical disks.

 .PARAMETER Root
  The folder to create the zeroed out file in.  This can be a drive root (c:\)
  or a mounted folder (m:\mounteddisk).  This must be the root of the mounted
  volume, it cannot be an arbitrary folder within a volume.

 .PARAMETER PercentFree
  A float representing the percentage of total volume space to leave free.  The
  default is .05 (5%)

 .EXAMPLE
  PS> Write-ZeroesToFreeSpace -Root "c:\"

  This will create a file of all zeroes called c:\ThinSAN.tmp that will fill the
  c drive up to 95% of its capacity.

 .EXAMPLE
  PS> Write-ZeroesToFreeSpace -Root "c:\MountPoints\Volume1" -PercentFree .1

  This will create a file of all zeroes called
  c:\MountPoints\Volume1\ThinSAN.tmp that will fill up the volume that is
  mounted to c:\MountPoints\Volume1 to 90% of its capacity.

 .EXAMPLE
  PS> Get-WmiObject Win32_Volume -filter "drivetype=3" | Write-ZeroesToFreeSpace

  This will get a list of all local disks (type=3) and fill each one up to 95%
  of their capacity with zeroes.

 .NOTES
  You must be running as a user that has permissions to write to the root of the
  volume you are running this script against. This requires elevated privileges
  using the default Windows permissions on the C drive.
 #>
 param(
   [Parameter(Mandatory=$true,ValueFromPipelineByPropertyName=$true)]
   [ValidateNotNullOrEmpty()]
   [Alias("Name")]
   $Root,
   [Parameter(Mandatory=$false)]
   [ValidateRange(0,1)]
   $PercentFree =.05
 )
 process{
   #Convert the $Root value to a valid WMI filter string
   $FixedRoot = ($Root.Trim("\") -replace "\\","\\") + "\\"
   $FileName = "ThinSAN.tmp"
   $FilePath = Join-Path $Root $FileName

   #Check and make sure the file doesn't already exist so we don't clobber someone's data
   if( (Test-Path $FilePath) ) {
     Write-Error -Message "The file $FilePath already exists, please delete the file and try again"
   } else {
     #Get a reference to the volume so we can calculate the desired file size later
     $Volume = gwmi win32_volume -filter "name='$FixedRoot'"
     if($Volume) {
       #I have not tested for the optimum IO size ($ArraySize), 64kb is what sdelete.exe uses
       $ArraySize = 64kb
       #Calculate the amount of space to leave on the disk
       $SpaceToLeave = $Volume.Capacity * $PercentFree
       #Calculate the file size needed to leave the desired amount of space
       $FileSize = $Volume.FreeSpace - $SpacetoLeave
       #Create an array of zeroes to write to disk
       $ZeroArray = new-object byte[]($ArraySize)

       #Open a file stream to our file
       $Stream = [io.File]::OpenWrite($FilePath)
       #Start a try/finally block so we don't leak file handles if any exceptions occur
       try {
         #Keep track of how much data we've written to the file
         $CurFileSize = 0
         while($CurFileSize -lt $FileSize) {
           #Write the entire zero array buffer out to the file stream
           $Stream.Write($ZeroArray,0, $ZeroArray.Length)
           #Increment our file size by the amount of data written to disk
           $CurFileSize += $ZeroArray.Length
         }
       } finally {
         #always close our file stream, even if an exception occurred
         if($Stream) {
           $Stream.Close()
         }
         #always delete the file if we created it, even if an exception occurred
         if( (Test-Path $FilePath) ) {
           del $FilePath
         }
       }
     } else {
       Write-Error "Unable to locate a volume mounted at $Root"
     }
   }
 }

So PsExec, launching cmd and then PowerShell to execute this script against a list of machines, was the best way. You need to run your cmd prompt as your domain admin account (Shift + right-click cmd, then "Run as different user"), because the problem with PsExec is that it will otherwise try to use your local rights to create the remote service, which fails with "access is denied".

psexec -d -u domain\username  @C:\users\%username%\desktop\diskpunch\computers.txt cmd /c "\\dom.ain\NETLOGON\Diskpunch\diskpunch.bat"" (We're putting it in netlogon since we're using it again later)

Then put in your password

(In your computers.txt, just list the servers, and please note the double quotes on the end.)

The batch file has the following

powershell.exe -executionpolicy Bypass -command "Get-WmiObject Win32_Volume -filter drivetype=3 | \\Dom.ain\NETLOGON\diskpunch\Write-ZeroesToFreeSpace.ps1"

This may seem simple to some of you, but it really got on my nerves getting the configuration right, so I thought I'd share in case anyone else wants to disk punch their virtual infrastructure.

Anyway, hope this helps someone.


