Channel: File Services and Storage forum
Viewing all 7579 articles

Connecting a 2008 R2 server to a VNX 5300 and Mounting a Lun as both a Drive and Folder via FC?


I'm trying to connect an HP server with a dual-port QLogic HBA to several LUNs on my EMC VNX 5300. The HBA can "see" the VNX.

On the VNX side I've created 6 LUNs, registered the HBA within the VNX, created a Storage Group, and added the host and LUNs to the Storage Group.

On the server side I've added the Storage Manager for SANs feature. When I try to add the LUNs I get this message:

 "Unable to find the Virtual Disk Service (VDS) {I've verified the service is running} or any hardware providers installed on (servername). To use Storage Manager for SANs, VDS and at least one hardware provider must be installed. Check the VDS and VDS hardware provider installation."

I can find lots of information on iSCSI but nothing on FC hardware providers. I've installed SAN Surfer and it can see the VNX, but I can't seem to find just what needs to be installed and/or activated to qualify as a "hardware provider".

Ultimately I need to mount one LUN as a drive, then mount several more LUNs inside that drive as folders.

I don't have a choice on the configuration; this is coming from the government customer.
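For what it's worth, the drive-plus-folder mounting itself doesn't require Storage Manager for SANs; once the LUNs show up in Disk Management after a rescan, diskpart can do it. A sketch (the disk numbers, drive letter, and folder path are hypothetical):

select disk 1
online disk
create partition primary
format fs=ntfs quick
assign letter=E

select disk 2
online disk
create partition primary
format fs=ntfs quick
assign mount=E:\LUN2

Repeat the second block for each additional LUN, pointing assign mount at an empty folder on the NTFS E: drive.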

Bryan

Arlington VA


Increasing the change journal size


We're an engineering firm with 20 branch offices that need to work together. We use DFSR, but only where we just need files replicated from office to office, not as a file-control system. We purchased software that replicates files and file locks across offices, but we're having some issues with it.

One of the issues is that sometimes a resync is required when the software detects that the Windows change journal has gone past the software's last checkpoint, and that takes time.

One of the vendor's suggestions was to increase the Windows change journal size. I had a journal wrap error back on Windows 2000 Server, so I know what that's about, but since we moved to 2003 and now Server 2008 R2 I've had no issues.

So when the software company says the "Windows change journal," I'm guessing they mean the USN journal. As I understand it, Windows sets the size of this journal automatically based on the size of the volume.

Before I just increase the maximum size of a Windows system file beyond its default, I wanted to do a little research. Are there issues with increasing the maximum size of this journal?

As stated this is Windows Server 2008 R2 Enterprise.
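If by "Windows change journal" they do mean the NTFS USN journal, it can be inspected and resized with fsutil (the sizes below are examples, not recommendations):

fsutil usn queryjournal D:
fsutil usn createjournal m=0x40000000 a=0x400000 D:

queryjournal shows the current maximum size and allocation delta; running createjournal against a volume that already has a journal simply raises those limits in place. The main cost of a larger journal is the extra disk space it can consume; it just means more change history is retained before the journal wraps.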

Thanks

RS 

FTPS issues


Hi

I have set up a new FTPS server on Windows Server 2012 R2, and while I can connect, I cannot write any files to the root FTP folder using WinSCP or FileZilla; the error I get is "550 The supplied message is incomplete. The signature was not verified." I am using a Windows 8 client to connect. Having done some research I found that KB2888853 addresses this issue, which is apparently a problem with TLS 1.1 and TLS 1.2. The trouble is, when I try to run the installer for the hotfix I get an error stating that "The update is not applicable to your computer," so I can't apply it.

According to https://support.microsoft.com/kb/2888853/?wa=wsignin1.0, Server 2012 R2 is one of the operating systems this applies to. Has anyone else had this issue?

DFSR Event ID 2213


We've seen a greatly increased number of 2213s over the last month. Fixing them each time isn't an issue, but I haven't been able to find a root cause for why they keep happening in the first place. It's not limited to just one or two servers, either, and while some occur after scheduled reboots, others happen in the middle of the day, outside regular backup hours. Some are on physical servers and some on VMs.

Does anyone have any troubleshooting ideas, or a list of all known causes of unexpected shutdowns? Something, anything I can use to try to track down the causes?

[Windows 2012 R2] DFSR for one server


Simple as that, guys and gals: I need to make it so that if a user is editing a file, anyone else is locked out of it or gets it read-only.

Can you help? I cannot find anything about this. For 3 servers, yes, but for one?

Recovery of service deleted files


When a service running under the Local System account deletes a file, does it go to a Recycle Bin or is it permanently deleted? If it goes to a Recycle Bin, is it possible to recover that file?

Thanks,

Ed.

Need a better automated backup solution

Currently I use Windows Backup for all of my systems, which simply backs up to a shared folder on the network (I have an old PC that I use as a NAS + VPN (in a VM) + game server (in a VM)).

The main issue with Windows Backup is that you eventually have to remove old backups manually, because it refuses to delete them itself, even if you have 5 versions of the same file.

In looking for something better, I would really like to find something that keeps one system image plus backups that simply track file changes, but automatically starts deleting old versions of files when the drive is around 90% full.

2012 R2 dual parity virtual disk capacity incorrect


I have a 2012 R2 storage server with 12x 6 TB HDDs. Each drive shows up with 5.46 TB capacity, which is expected. What is unexpected is the result of creating the array in PowerShell. I would like to set up the 12 drives as a single dual-parity space. Using the command

New-VirtualDisk -StoragePoolFriendlyName 72tb -FriendlyName HDD_Parity -UseMaximumSize -ResiliencySettingName Parity -ProvisioningType Fixed -PhysicalDiskRedundancy 2 

I get a reported capacity of 48.88 TB. Using (n-2) x 5.46 TB I should see 54.6 TB; however, 48.88 / 9 = 5.43 TB, or roughly n-3. Specifying columns manually at 12 gives the same result; 11 columns drops the capacity by one more drive, and so on.

If I switch routes and use

New-VirtualDisk -StoragePoolFriendlyName 72tb -FriendlyName HDD_Parity -UseMaximumSize -ResiliencySettingName Parity -ProvisioningType Fixed -PhysicalDiskRedundancy 1 -NumberOfColumns 6 

which, if I am not mistaken, is two 6-drive single-parity arrays striped or spanned? I get 54.57 TB, which aligns with (n-1) x 2, i.e. 5.46 TB x 10.

So where is the missing drive in the dual-parity virtual disk? It's not reported as a hot spare, and I would rather not throw away the capacity of a third drive that isn't being used in any conceivable way.
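For what it's worth, the numbers work out if the dual-parity layout is consuming three columns of overhead rather than two at this column count (my reading of the figures, not something stated anywhere in the question):

12 x 5.46 TB = 65.52 TB raw
65.52 TB x 10/12 = 54.60 TB (textbook n-2 dual parity)
65.52 TB x 9/12 = 49.14 TB (close to the reported 48.88 TB, i.e. n-3)

The remaining gap between 49.14 and 48.88 TB is about what the pool holds back for its own metadata. In other words, the capacity isn't "missing" to a hot spare; the dual-parity resiliency scheme at this column count appears to cost a third disk's worth of parity overhead.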



SMB 3.0 HA


Hi, 

I'm looking for a way to cut down storage costs with our cloud provider. We currently use a SAN for file storage and it is not cheap (it's fine for SQL). As we are looking to have our own domain within the cloud network, I would like to explore using SMB 3.0 to store our application files.

Would I be able to use SMB 3.0 in an active/active or active/passive configuration without a SAN?

The idea is that I would have 2 servers with large storage attached; they would act as file servers for the application servers, and when it comes to restarting servers, one would remain active so the application servers could still pull files.
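One caveat worth flagging: a 2012-era clustered file server (classic or Scale-Out) still needs storage both nodes can see, such as a shared SAS JBOD behind Storage Spaces, even if it isn't a traditional SAN; with no shared storage at all, DFS Replication is the usual (roughly active/passive) fallback. Assuming shared disks are available, the Scale-Out route looks roughly like this (the names and address are made up):

New-Cluster -Name FSCLUSTER -Node FS01,FS02 -StaticAddress 10.0.0.50
Add-ClusterScaleOutFileServerRole -Name SOFS
New-SmbShare -Name AppFiles -Path C:\ClusterStorage\Volume1\AppFiles -ContinuouslyAvailable $true

With SMB 3.0 transparent failover (the -ContinuouslyAvailable flag), application servers keep their open handles across a node restart.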

Any thoughts? 

Thanks 

Procmon CreateFile duration takes multiple seconds


Hello, 

Currently I am trying to find out why Citrix logons are randomly slow for random users. While capturing the logon process with Procmon, the duration of a CreateFile operation during a slow logon is 9.6457329 seconds.

With a normal logon this is less than 0.0000xx seconds.

The share sits on a Windows 2012 cluster resource. When changing the path to another (non-clustered) server, the issue does not occur.

What could we do to troubleshoot further?



DFS Replication folder mess up (Change)


Site

No Namespace just replication

Server A > W2k3R2 Ent x64

Server B > Windows Storage Server 2003 SP2

Server C > Server 2k8 R2 Ent

We do not have full replication in place yet. Just Server A to B (local) and then B to C (DR Site)

Server C died, so I built a new Server C to replace it. When I went to add replication, I did not set the local path correctly; I left it at the root of D:. All the data started copying over to the root of Server C's D: drive, so I disabled the connection once I realized my mistake. All I want to do is have the data start going to the new local path I will create on the new server. I'm afraid that if I leave B to C disabled and create the new membership path, then once it's created, the data still trying to go to the root of D: will take off and the new local path will populate too. I'll try to lay out what I want to do to see if it makes sense.

Leave the B to C connection disabled.

Go to the root of Server C (D:\) and delete what was copied over so far.

Delete the membership in replication for Server C and recreate a new membership to the correct location.

Turn replication from B to C back on with the new location so replication starts.

I'm not sure if this will work, given DFSR's 60-day "tombstone" setting. I'm afraid that even though I deleted the data from Server C, when replication is started back up it may somehow try to replicate back to Server B. Again, I have one-way replication, B to C.

Make any sense?

Work folders for a group


Hello

Is it possible to configure Work Folders for a common shared documents folder rather than for individual user folders? I think not.

So, is it possible to configure Work Folders with a different login than the user who is logged on to the workstation?

Thank you.

Networking into a 2008 R2 file server


I am Sbu, an IT technician at Koukamma Municipality. I've been experiencing a problem with laptop users' accounts.

When users reconnect to our organization's network, they sometimes get the message "Windows cannot access \\servername\Redirected$\username\Desktop". Users on desktop PCs don't get this error. What can I do to prevent this from happening again? Please help.

Migrate data to new folder structure


I need to migrate a lot of data to a new folder structure; one of the things I will have to deal with is file paths that are too long.

To copy the data I have made a PowerShell script which uses robocopy to copy everything.

Of course I want no disruption in this process (or as little as possible); what can I do to prevent issues with long file paths?

Is there an easy way to modify my script to detect issues with long file paths?

What would be the way to go to prevent copy errors during the robocopy run and fix possible issues before starting?
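One way to find the problem paths before the copy (a sketch; the source path and CSV location are placeholders, 260 characters is the classic MAX_PATH limit, and 248 leaves some headroom for a longer destination prefix):

Get-ChildItem -Path D:\Source -Recurse -ErrorAction SilentlyContinue |
    Where-Object { $_.FullName.Length -gt 248 } |
    Select-Object FullName, @{ n = 'Length'; e = { $_.FullName.Length } } |
    Export-Csv C:\Temp\longpaths.csv -NoTypeInformation

Note that Get-ChildItem itself can fail on paths beyond the limit on older PowerShell versions, so also consider a robocopy dry run, e.g. robocopy D:\Source E:\Dest /E /L /LOG:C:\Temp\precheck.log, where /L lists everything that would be copied, and logs any errors, without copying a byte.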

Best way to replicate a directory share to a new Windows 2012 server, keeping all permissions and timestamps the same?


We have a legacy Windows 2008 R2 server with a single network share on it that was acting as a primary file share. Inside the share are about a dozen folders (with subfolders), all with different permissions applied at the folder and file level.

I want to migrate/copy this entire directory structure to a new Windows 2012 Standard server, keeping all the timestamps, permissions, etc. exactly as they are on the old directory structure.  Both servers are part of the same single Windows 2008 R2 domain.

What would be the best-practice method for achieving this? XCopy? If so, what syntax would I use to make sure I don't lose any permissions or change any data or timestamps during the migration?
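Robocopy is generally a better fit than XCopy for this, because it can copy security information and directory timestamps in one pass. A sketch (the paths are placeholders):

robocopy \\oldserver\share \\newserver\share /MIR /COPYALL /DCOPY:T /ZB /R:1 /W:1 /LOG:C:\Temp\migration.log

/COPYALL copies data, attributes, timestamps, NTFS ACLs, owner, and auditing info; /DCOPY:T preserves directory timestamps; /ZB retries in Backup mode if access is denied; /MIR mirrors the tree (and will delete extras at the destination, so double-check the target). Run it once for the bulk copy, then rerun it during the cutover window to pick up any changes.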


Possible issue with DFS and CSC error 80070035


I have a handful of users who have a strange, recurring issue with Offline Files and DFS in Win7 SP1 x64.

We have a DFS root \\domain.local\DFS. Server ukln1fs1 is a root replica, running a fully patched instance of Server 2012 R2. dfsnamespace is a DNS alias of ukln1fs1, and the SPNs for host/dfsnamespace<.domain.local> and cifs/dfsnamespace<.domain.local> are registered with that server.

Clients have the Documents folder redirected by GPO to \\dfsnamespace\DFS\-teamfolder-\-username-\docs and redirection works fine. 

Sometimes when clients are disconnected from the network and then reconnect, or when they start up disconnected from the network and then connect, they are unable to connect to \\dfsnamespace\dfs. They get error 80070035. Clients can connect to \\dfsnamespace fine, and to the individual shares within the DFS structure. This affects all users on the computer once it begins occurring, and the only resolution is to restart the computer while connected to the domain.

Kerberos is using TCP (MaxPacketSize 0)

LanManServer & LanManWorkstation signing requirements match (EnableSecuritySignature 1, RequireSecuritySignature 0)

Have used FormatDatabase on the CSC service to rebuild the offline files cache.

Latest hotfixes for Win7 file services and offline files components are installed: KB2775511 (enterprise hotfix rollup), all latest hotfixes from KB2820927 (collection of enterprise hotfixes including offline files and folder redirection components), all latest hotfixes from KB2473205 (file server technology services).

The adapters & bindings order has the SSL VPN adapter at the top, followed by the NIC, then the Wi-Fi adapter. IPv4 is the higher-priority protocol in adapters & bindings.


Problems accessing Server 2012 from another Server 2012


Hello,

I have 2 servers, one running Server 2012 and the second running Server 2012 R2. Both of those servers are to be used in DFS Replication.

The problem I'm having is that 90% of the time I'm unable to access one server from the other. I've tried:

\\servername

\\IP

\\FQDN

None of those work. Only occasionally does it connect, and then it drops after 5 minutes or so.

I can ping both servers from each other without any problems. What is more, I can access any of those servers from any other machine that's running any Server version prior to 2012. I can also access any of those servers from a Windows 7 client.

The servers are on different subnets, in different offices connected via VPN. However I think this is irrelevant at this point as, like I mentioned, no problems appear when using any of the server versions before 2012.

To me it looks like a firewall issue, however I'm not sure which option is responsible for this.

Please advise.

Kind regards,

Wojciech

Cannot delete network folders - thumbs.db "The action can't be completed because the file is open in Windows Explorer"


I have applied the GPO user setting "Turn off the caching of thumbnails in hidden thumbs.db files" to every device on my network. It has been applied for weeks.

When I attempt to delete a folder on another server, one that I have full access to and could otherwise delete with no issue, probably 50% of the time I receive the "The action can't be completed because the file is open in Windows Explorer" message for the thumbs.db file.

Every thread (of the hundreds about this apparently well-known bug) says to enable the policy I already have in place to fix this. I have; I've confirmed it via gpresult /h and checked the registry key it creates under the user profile. I have no idea how a user policy is meant to stop what I think is the server creating or locking these files, or why, with the GPO applied, anything should still be accessing thumbs.db.

I am unsure what steps I can take now to ensure I can delete a network folder 100% of the time. This is such a simple thing to want, and not being able to fix it is stressing me to no end.

See below. This is from a server where I am trying to delete a folder located on another server. Notice the registry setting is set, so nothing should be generating (and therefore, I assume, locking or accessing) the thumbs.db. I have confirmed that every other client that might touch this folder definitely has the same registry key set.


Delete BDE volume after the C: drive has been changed to dynamic


I've been trying to expand the C: drive on a 2008 R2 VM, but the BDE volume was in the way. Reading a different thread, I took the advice to convert the disk to dynamic before realizing that I should have just run the bcdboot c:\windows /s c: command so the volume could be deleted. So now my C: drive and my BDE volume are both on dynamic disks, and it won't let me delete the BDE volume.

Cluster Storage


We have three Hyper-V host servers (Windows 2012) in a cluster environment. We have a SAN that is mounted as cluster storage on all three servers. Hyper-V1 has ownership of the disk.

Recently we increased the disk space on the SAN volume, and the change is reflected in the cluster disk but not in the cluster volume. The correct size shows in Disk Management on all servers, but not in cluster storage.

Please see the attached screenshot to understand more clearly.

Can someone help me resolve this issue?

Thanks

