Hi, here's the scenario:
Domain A has a file server, and access to its shares is restricted with ACLs. I'm planning to disjoin the file server from domain A and join it to domain B. Will this affect the ACLs?
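My rough plan, in case it helps, is to save the existing ACLs before the move so I can compare or restore them afterwards; a minimal sketch (the share root and output file are placeholders):

# save all DACLs under the share root recursively, continuing past errors
icacls "D:\Shares" /save "C:\Temp\shares-acl-backup.txt" /t /c
# after the move the same file can be inspected, or put back with icacls /restore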
Thanks in advance,
Shamal
Hi there.
We've got 4.5TB of data on an 8TB volume. It was deduped and we got around 60% savings. Great! It worked well until our Mac users tried accessing it from Mavericks/Yosemite; now they're getting access denied errors when trying to save to the volume.
Question 1, which is a bit late as I've already started unoptimization: is there a way around this from the Microsoft end? I've been through all the resources on the Apple side and they say the only option is not to use Data Deduplication.
Question 2. I've set off an unoptimization job and it took about 16 hours to get to 100%, but it's been sitting there for a couple of hours now. Is this normal? There is still a lot of disk activity, and in Resource Monitor I can see lots of files still being written, so is it just that the progress meter isn't very accurate?
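For what it's worth, this is how I've been checking the job from PowerShell (not sure it's any more accurate than the GUI meter):

# show the running unoptimization job and its reported progress
Get-DedupJob -Volume D: | Format-List Type, State, Progress, StartTime
# overall dedup statistics for the volume
Get-DedupStatus -Volume D: | Format-List OptimizedFilesCount, InPolicyFilesCount, SavedSpace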
Question 3. I started unoptimization at 18:21. In the Deduplication/Diagnostics event log I can see the following entries:
- 10240: "Unoptimization job memory requirements" at the start, but then a lot of
- 8243: "Failed to enqueue job of type "Optimization" on volume "D:"" warnings. Is it normal to see these when unoptimizing?
- 10241: "Optimization reconciliation has started" at 08:01
- 10242: "Optimization reconciliation has completed" at 08:01
- then a couple more 8243: "Failed to enqueue job of type "Optimization" on volume "D:"" warnings
- 10240: "Scrubbing job memory requirements"
- then a couple more 8243: "Failed to enqueue job of type "Optimization" on volume "D:"" warnings
Question 4. The volume is getting very full. I expect it's because I now have data saved both as chunks and as files, but what happens if/when the drive fills up completely? Will it automatically do garbage collection? Will I be able to simply run it as a manual job, or will I be in trouble? If so, what do I need to do?
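In case it matters, my fallback plan is to kick off garbage collection manually once unoptimization finishes, along these lines (assuming a manual job is still allowed while free space is low):

# reclaim space from chunks that no longer have any references; -Full does a deeper pass
Start-DedupJob -Volume D: -Type GarbageCollection -Full -Wait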
Hi,
We have configured a Work Folders environment with a clustered file server.
Everything works great on Windows, but we're unable to connect with the iPad app. It seems to authenticate with the server, because it knows whether the password is right or wrong. When I enter the correct password I get the following error message:
"Unknown error:0x8007203D"
I'm unable to find any error description. Does anyone have a solution to this problem?
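In case the configuration matters, this is roughly how I've been checking the sync share on the server (the user and share names below are placeholders for my environment):

# show how the sync share is configured (path, granted user groups, etc.)
Get-SyncShare | Format-List *
# check the sync status of the affected user against the share
Get-SyncUserStatus -User "CONTOSO\ipaduser" -SyncShare "WorkFolders"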
I am attempting to extend an existing partition using Disk Management in a Server 2008 R2 environment. The volume physically exists on a SAN; I expanded the size of the volume on the SAN, and Disk Management sees the additional space as unallocated, directly behind the existing D:, but Extend Volume is still greyed out. Is my only option to use third-party software?
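Since Server 2008 R2 has no Resize-Partition cmdlet, the only scripted route I know of is diskpart; here's a sketch of what I'd try from an elevated prompt before buying anything (the volume number is a placeholder, check it with "list volume" first):

# write a small diskpart script and run it; rescan forces the disk subsystem to
# re-read the resized SAN LUN before attempting the extend
@"
rescan
select volume 3
extend
"@ | Set-Content C:\Temp\extend-d.txt -Encoding ASCII
diskpart /s C:\Temp\extend-d.txt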
Hi, I have two DFS servers that have stopped replicating with each other. One server is 2012 R2 and the other is 2008 R2. I have clicked Verify Topology and it says they are fully connected.
Everything seems to say that the connections are good, but replication just isn't happening. I'm stuck on what to try next, so any help would be appreciated. Thanks!
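For what it's worth, the next check I was planning to run is a backlog count from the 2012 R2 box, which has the DFSR PowerShell module (the replication group and folder names are placeholders):

# count the files queued from the 2008 R2 member to the 2012 R2 member
(Get-DfsrBacklog -GroupName "RG01" -FolderName "Share01" -SourceComputerName "FS-2008R2" -DestinationComputerName "FS-2012R2").Count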
Hi All,
I have a central file server replicating its data to two remote file servers with read-only shares.
When files are deleted on the central server, they are not being deleted on the remote servers. New files on the central server are replicating fine.
Why is this and how do I fix it?
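The only diagnostic I've thought of so far is a health report run from the central server against the two read-only members, something like this (assuming the central server has the 2012 R2 DFSR module; group and server names are placeholders):

# generate an HTML health report covering backlog and errors for both remote members
Write-DfsrHealthReport -GroupName "RG01" -ReferenceComputerName "CENTRALFS" -MemberComputerName "REMOTE1","REMOTE2" -Path "C:\Temp"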
Thanks Christoph
I would like to generate a report with PowerShell that lists folder permissions on an NTFS volume.
Is there an easy way to set this up?
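This is the kind of thing I've sketched so far, but I'm not sure it's the best approach (the root path and output file are placeholders):

# walk the volume, read each folder's ACL and flatten it into one CSV row per access rule
$root = "D:\Data"
Get-ChildItem $root -Recurse |
    Where-Object { $_.PSIsContainer } |
    ForEach-Object {
        $path = $_.FullName
        (Get-Acl $path).Access |
            Select-Object @{n='Folder';e={$path}}, IdentityReference, FileSystemRights, AccessControlType, IsInherited
    } |
    Export-Csv "C:\Temp\ntfs-permissions.csv" -NoTypeInformation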
Hi Guys,
I've applied a disk quota through FSRM using the 250 MB template. When the user logs in, the 250 MB drive is connected,
but that user can also access other users' data through the shared quota folder, and can even delete it. How can I restrict each user to their own folder so they can't access other users' data?
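What I was thinking of trying, though I'm not sure it's the recommended way, is breaking inheritance on each user's folder and granting only that user (plus admins and SYSTEM) access; a rough sketch, assuming the folders are named after the user accounts (domain and path are placeholders):

$root = "D:\Shares\Users"
Get-ChildItem $root | Where-Object { $_.PSIsContainer } | ForEach-Object {
    $acl = Get-Acl $_.FullName
    # stop inheriting the wide-open permissions from the shared quota folder
    $acl.SetAccessRuleProtection($true, $false)
    # the folder owner (assumed to match the folder name) gets Modify
    $acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule("CONTOSO\$($_.Name)", "Modify", "ContainerInherit,ObjectInherit", "None", "Allow")))
    # keep admins and SYSTEM in so backups and FSRM still work
    $acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule("BUILTIN\Administrators", "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")))
    $acl.AddAccessRule((New-Object System.Security.AccessControl.FileSystemAccessRule("NT AUTHORITY\SYSTEM", "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")))
    Set-Acl $_.FullName $acl
}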
Thanks and Regards,
Muhammad Tayyub
Need to know what is wrong; please help with the following script:
Rem Destination drive (local)
set Drive2=C:
Rem Robocopy options (note: *.m4a needs the leading dot)
set flags=/MIR /W:10 /R:10 /A+:a /XF *.mp3 *.mp4 *.mpg *.m4p *.m4a *.exe /XD "My Videos" "My Pictures" "My Music" Itunes Appdata "Application Data" Backups /TEE /FFT /LOG+:%Drive2%\Synchlog.txt
Rem Users Local PC's
Rem Source drive: map U: to the remote PC's C$ share (no space before C$, and /USER needs an account name)
set Drive1=U:
net use U: /Delete
net use U: \\10.1.2.34\C$ /USER:10.1.2.34\<someone>
Rem robocopy must be a single command line, with backslashes in the paths
robocopy "%Drive1%\Users\<someone>\test" "%Drive2%\Users\<someone>\test" %flags%
Hello all,
Running Robocopy locally on a Windows 2008 R2 server. By local I mean the source and destination have local drive letters on the server. It's a file server cluster and I am moving user files from one LUN drive to another.
The issue is that when I open the properties of the source and destination folders to compare them, I see a huge difference in the file and folder counts.
As an example, take User1. The source has 64,258 Files and 2,165 Folders. The destination has 66,248 Files and 2,350 Folders.
Here is the script I am running:
robocopy.exe "I:\DomainUsers$\User1" "V:\DomainUsers$\User1" /e /v /r:1 /w:1 /zb
Any idea what is happening here? I have noticed this a few other times as well.
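In case the Explorer properties dialog itself is part of the problem, this is how I've been re-counting on both sides (hidden and system files are included via -Force):

$src = 'I:\DomainUsers$\User1'
$dst = 'V:\DomainUsers$\User1'
"Source:      {0}" -f (Get-ChildItem $src -Recurse -Force | Measure-Object).Count
"Destination: {0}" -f (Get-ChildItem $dst -Recurse -Force | Measure-Object).Count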
Appreciate any help I can get.
Hi,
What's the best way to add a new target to a namespace and replication group? The goal is to replace an old file server in the end.
I did the following:
- copied the share with robocopy, including timestamps of files and folders
- created the share
- added the new share as a new target as well as a mesh member of the replication connection
- disabled the new member in the namespace, so no one can access it until DFSR is fully done and initialized
After the new DFSR connection was replicated through AD to all 4 members (3 in different sites, 1 in the same site), the following happened:
DFSR began, and almost every file ended up in conflict and was copied over to the Conflict folder. Almost all timestamps of the folders were changed to the current date, but the timestamps of the files were not.
Thousands of event log entries: 4412
"The DFS Replication service detected that a file was changed on multiple servers. A conflict resolution algorithm was used to determine the winning file. The losing file was moved to the Conflict and Deleted folder."
Any idea why? Later on I disabled the connections to the remote file servers, but that did not stop it.
My idea was to pre-seed the files with robocopy, so what would be the best way to prevent this for the next share? Would it be better to just add the target to a bi-directional connection to the local file server, without adding it to DFS-N and without copying the files beforehand? Is it better to let DFSR do the whole initial sync, including files?
In the end I had no loss of data, but checking almost every file for conflict took ages to finish.
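For the next share, my idea is to validate the pre-seeded copy before adding the member, along these lines (run from a 2012 R2 box with the DFSR module; the paths are placeholders, and my assumption is that matching hashes mean DFSR should accept the pre-seeded file without conflicting):

# compare the DFSR hash of the same file on the existing member and the new one
Get-DfsrFileHash -Path "\\OLDFS\D$\Share\Finance\Budget.xlsx"
Get-DfsrFileHash -Path "\\NEWFS\D$\Share\Finance\Budget.xlsx"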
Thanks a lot,
Marco
I have set up a File Server Resource Manager task on Server 2008 R2.
The task has the following properties:
- Type: File expiration
- Scope: W:\Share\Company
- Expiration folder: S:\Archive
- Days since file was last modified: 1460 days (4 years)
The task runs; however, "expired" files are not being moved to the expiration folder.
For example, a file with modified date "Thursday, 23 February 2006, 2:42:27 PM" is not moved to S:\Archive.
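To double-check which files should fall outside the 1460-day window, I listed the candidates manually like this (same scope and threshold as the task):

$cutoff = (Get-Date).AddDays(-1460)
Get-ChildItem "W:\Share\Company" -Recurse |
    Where-Object { -not $_.PSIsContainer -and $_.LastWriteTime -lt $cutoff } |
    Select-Object FullName, LastWriteTime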
Can someone please help?
Hello,
I have a Win 2008 R2 server with 10 internal drives and 1 external USB drive. Internally: 1 SSD for boot, an ASUS optical drive, 6 x 3TB spinners and 2 x 4TB spinners. The oldest drive is 18 months old. They are all Seagate drives; the 4TB ones are NAS drives.
SeaTools sees 4 of the 6 3TB drives, the SSD and the USB drive; it doesn't see 2 of the 3TB drives. Of the drives listed in SeaTools, all have passed every test I have run.
The server keeps hanging every couple of hours requiring a power reset to get it back up.
I am seeing some errors in the event log (but not many):
atapi errors on ide\ideport8
disk error on disk\dr9
I have backed up the contents and done a full format on each drive (I do not run RAID; each drive is partitioned for a specific purpose).
So how do I know which drive is failing? Do the drive numbers in Disk Management faithfully map to the errors I am seeing in the event logs?
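The closest I've got to mapping them is pulling the disk index, model and serial with WMI and comparing that against Disk Management and the DR numbers in the event entries (no guarantee the DR number lines up, which is really my question):

Get-WmiObject Win32_DiskDrive |
    Select-Object Index, DeviceID, Model, SerialNumber, Status |
    Sort-Object Index | Format-Table -AutoSize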
thanks
Tanya
Hello,
I am trying to create a cluster with a shared Storage Spaces pool. My JBODs are connected to two servers with dual-port FC HBAs, and with this setup Failover Cluster Manager passes all validation tests. When I create a storage pool with the wizard, the pool appears in Server Manager but never in the Failover Cluster Manager GUI! Any ideas?
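One check I can run, in case it's relevant: listing the bus type and health of the disks behind the pool, since as far as I understand clustered Storage Spaces expects shared SAS-connected disks (the pool name is a placeholder):

Get-StoragePool -FriendlyName "Pool1" |
    Get-PhysicalDisk |
    Select-Object FriendlyName, BusType, OperationalStatus, HealthStatus, Size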
Thanks!
I cannot copy files from a deduped volume to a new volume and am getting the following error:
A Data Deduplication configuration file is corrupted. The system or volume may need to be restored from backup.
Context:
File name: R:\System Volume Information\Dedup\Settings\dedupConfig.01.xml
Error-specific details:
Error: Empty XML configuration contents, 0x80565310, Data deduplication encountered a corrupted XML configuration file.
Is there any way to recover from this if I don't have a backup of the config file? This is my Veeam backup server and is not backed up itself, since Veeam copies its config files to another server.
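The only idea I've had so far, and I'm not at all sure it touches the configuration file rather than just the chunk store, is a full scrubbing pass:

# scrubbing is meant to repair dedup corruptions from redundant metadata; -Full scans the whole volume
Start-DedupJob -Volume R: -Type Scrubbing -Full -Wait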
The scenario I am trying to achieve is this:
Windows Server 2012 R2 serves as an iSCSI target configured with 1 iSCSI Virtual Disk
2 Hyper-V servers connect to this target with the iSCSI Initiator, and I want multiple targets for that iSCSI Virtual Disk, using CHAP
**These 2 are nodes in a failover cluster, and this iSCSI disk is added as a CSV.
The issue I have is that you can only have 1 target per iSCSI Virtual Disk.
Both Hyper-V servers can connect to this LUN without issue when I add both initiator IDs to the target, but once I enable CHAP, you can only put one initiator ID in the "Name" field, so I can only connect from 1 Hyper-V server.
Do you know of a way around this?
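What I'd like to express, if the cmdlets allow it, is both initiator IQNs on the one target plus a single CHAP secret that both nodes use; something like this sketch (the target name, IQNs and credential are placeholders):

# the credential's user name becomes the CHAP name and the password the CHAP secret
$chap = Get-Credential
Set-IscsiServerTarget -TargetName "ClusterCSV" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:hyperv1.contoso.local","IQN:iqn.1991-05.com.microsoft:hyperv2.contoso.local" `
    -EnableChap $true -Chap $chap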