When provisioning user accounts in Office 365 via PowerShell, you may encounter an error when trying to assign a licence SKU to the user account for Office 365 services.

As an example, I have added some PowerShell commands below to illustrate the process of creating a user in Office 365 via PowerShell.

$msolcred = Get-Credential
Connect-MsolService -Credential $msolcred

# Get-MsolAccountSku returns licence objects; -AddLicenses expects an AccountSkuId string
$msolsku = (Get-MsolAccountSku)[0].AccountSkuId
New-MsolUser -UserPrincipalName test@contoso.com -FirstName "Firstname" -LastName "Lastname" -DisplayName "Firstname Lastname"
Set-MsolUserLicense -UserPrincipalName test@contoso.com -AddLicenses $msolsku

You will encounter an error when you try to add the licence SKU to the account:

Set-MsolUserLicense : You must provide a required property: Parameter name: UsageLocation

The UsageLocation is a parameter that ties into Microsoft Online Services International Availability, and without configuring it you will not be able to assign licences to a user account.

This setting can be configured via the Set-MsolUser cmdlet in the following manner:

Set-MsolUser -UserPrincipalName test@contoso.com -UsageLocation <countrycode>

The SKU can now be assigned to the account.

Set-MsolUserLicense -UserPrincipalName test@contoso.com -AddLicenses $msolsku

Alternatively, you can add this setting to your scripts when creating your accounts.

New-MsolUser -UserPrincipalName test@contoso.com -FirstName "Firstname" -LastName "Lastname" -DisplayName "Firstname Lastname" -UsageLocation <countrycode>


If you are uncertain what your or your client’s country code is, you can look it up by country on the International Organization for Standardization website: https://www.iso.org/obp/ui/#search
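Pulling these pieces together, a minimal bulk-provisioning sketch might look like the following. The CSV path, its column names, and the choice of the first available SKU are assumptions for illustration only; adjust them to your tenant.

```powershell
# Connect to Microsoft Online Services (assumes the MSOnline module is installed)
$msolcred = Get-Credential
Connect-MsolService -Credential $msolcred

# Assumption: first SKU in the tenant; pick the AccountSkuId you actually want
$msolsku = (Get-MsolAccountSku)[0].AccountSkuId

# Assumption: users.csv has columns UserPrincipalName, FirstName, LastName, UsageLocation
Import-Csv .\users.csv | ForEach-Object {
    New-MsolUser -UserPrincipalName $_.UserPrincipalName `
        -FirstName $_.FirstName -LastName $_.LastName `
        -DisplayName "$($_.FirstName) $($_.LastName)" `
        -UsageLocation $_.UsageLocation
    # Because UsageLocation is set at creation, the licence assignment no longer fails
    Set-MsolUserLicense -UserPrincipalName $_.UserPrincipalName -AddLicenses $msolsku
}
```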


The problem

I encountered the following situation when I was rebuilding my lab environment for my SharePoint 2013 studies: the disk active time on my data disk (holding the VMs) would go up to 100% while the actual write speed dropped to around 100 KB/s, where I was expecting around 60 MB/s. This behaviour appeared in both Hyper-V and VMware Workstation, whether virtual machines were being installed or just idling after installation.

My configuration of this machine is as follows: an i7 processor, 32 GB RAM, a small SSD as a boot disk, and a larger SATA3 disk for storing the VHDs. I know an SSD for the VHDs would give a better experience, but time and financial constraints are what they are, and since this is a lab environment a SATA3 disk should work fine for my purposes.

The issue showed itself as extremely high active time combined with extremely low throughput (screenshot originally taken from http://superuser.com/).


I took the following steps to try and alleviate the issue; my first belief was that it had to be a driver-related issue, so I looked at installing the latest drivers:

  • Install latest Windows updates to the Host.
  • Install latest version of the Motherboard drivers.
  • Install latest version of the Intel INF driver package.

These changes did not resolve the issue, however, so I then looked at improving my experience by changing settings on the disk and the hypervisor applications:

  • Reformat the drive using the largest available allocation unit size.
  • Create single-file virtual disk files instead of multiple smaller files.

None of these options had the desired effect of easing the load on the drive and improving read/write performance: the VMs were constantly freezing and the disk still showed 100% active time on the host.

The solution

What did work, however, was the following suggestion from Dan Sewell, a user on the superuser.com website:

  • Changing the Power Options settings from Balanced to High Performance.

Changing this setting had an immediate effect on the disk active time, and even running multiple VMs caused no disk performance degradation from that point on.
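If you prefer to script the change, the built-in powercfg utility can switch the active plan. The GUID below is the default High Performance scheme (alias SCHEME_MIN) that ships with Windows; on a customised install, list your schemes first with powercfg /list.

```powershell
# Switch the active power plan to the built-in High Performance scheme
powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c

# Verify which scheme is now active
powercfg /getactivescheme
```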

You can find the link to the article and answer here: http://superuser.com/a/511481

I hope this helps you if you experience this issue.


I was trying to improve one of my VDI machines and decided it was a good idea to enable RemoteFX on the host (Windows Server 2008 R2 Service Pack 1) to improve the responsiveness of the VDI machine (running Ubuntu 12.04 LTS in this case) by giving it some extra graphical oomph.

My Hyper-V lab host also runs as the domain controller for my lab domain, and as it turns out this combination does not work properly. There is no error message, nor any warning, when you install either role on the server.

The only sign that something is wrong is that Hyper-V Manager will hang on ‘Loading Settings’ when you open the settings for a guest, or that machines with RemoteFX enabled will fail to start at all.

To cut to the chase: Microsoft does not offer a solution where this configuration results in a working RemoteFX-enabled machine with the Domain Controller role installed on the host. Instead, the issue has been declared to be by design and is therefore not supported.

Microsoft KB article: http://support.microsoft.com/kb/2506417

You have a server that is running the Remote Desktop Virtualization Host service in Windows Server 2008 R2 Service Pack 1. When you configure Active Directory on the server to add the server as a domain controller, you experience the following symptoms:

  • All existing RemoteFX-enabled virtual machines do not start.
  • An administrator cannot create a new RemoteFX-enabled virtual machine.

I have not been able to find a way around this. Several articles on the web suggest different ways to remedy the ‘Loading Settings’ issue, but all solutions end up either uninstalling or disabling the RemoteFX role or driver, or moving the Domain Controller services off the machine.

Looks like I will be re-purposing some hardware to get my RemoteFX fix on.

RemoteFX whitepaper download from Microsoft: http://www.microsoft.com/en-us/download/details.aspx?id=13864

Stopping the RemoteFX VideoCap driver via command line

dism /online /disable-feature /featurename:Microsoft-Windows-RemoteFX-EmbeddedVideoCap-Setup-Package
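Should you want the capture driver back later (for example after demoting the domain controller), the same DISM syntax with /enable-feature reverses the change:

```powershell
# Re-enable the RemoteFX video capture driver disabled above
dism /online /enable-feature /featurename:Microsoft-Windows-RemoteFX-EmbeddedVideoCap-Setup-Package
```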

A while back I encountered the following events at a customer’s SharePoint farm: Event ID 10038, Query server removed from rotation.

The full event is shown below:

Rule ID: Microsoft.Office.Sharepoint.Server.2007.Query_server_removed_from_rotation

Description: This alert indicates that a query server has been removed from the load-balanced rotation because IIS has encountered errors while communicating with the query server.

Rule Category: Event Collection
Rule Target: Microsoft.Office.Sharepoint.Server.2007.MOSS.Application
Alert Type: Error
Event ID: 10038
Event Source: Office SharePoint Server Search

This error can occur if the farm architecture splits the functionality of the Office Search Service between the index server and dedicated query servers. The event, however, does not mention the cause of the problem, nor does it even classify the issue as a problem, as it is logged as a warning.

However, when a query server is removed from the rotation, propagation of the index to that server stops and SharePoint diverts user queries to the other servers running the Office Search Service with the query role. When this happens often, and on multiple servers, the performance of the Office Search Service will degrade.

IIS WAMREG admin service

In this case the actual problem was a missing “Local Activation” permission on the DCOM component “IIS WAMREG admin Service”. This service is part of the IIS “Web Application Manager” (WAM), which provides the inter-process communication (IPC) mechanism between IIS-hosted processes and those that are not hosted by IIS. The problem was indicated by an error in the event log similar to the one below:

Type: Error
Source: DCOM
Event ID: 10016
Description:
The application-specific permission settings do not grant Local Activation permission for the COM Server application with CLSID {CLSID} to the user DomainName\UserName SID {SID}. This security permission can be modified using the Component Services administrative tool.

Fixing this issue is rather simple on Windows 2003 and 2008: start the dcomcnfg.exe utility (with elevated privileges) and navigate to the “IIS WAMREG admin service”. On the Security tab, click Edit in the “Launch and Activation Permissions” field, add the account or group, select “Local Activation” for each of the accounts or groups added, and click OK.

In Windows 2008 R2, however, you will first need to grant your administration account access before you can change these permissions. By default, Windows 2008 R2 restricts access to this dialog through an ACL on a registry key that is owned by the TrustedInstaller and that all other accounts can only read.

The actual key used by the IIS WAMREG admin service Security dialog is:

HKEY_CLASSES_ROOT\AppID\{61738644-F196-11D0-9953-00C04FD919C1}

After granting your administration account access to this key, you can start dcomcnfg.exe and follow the above procedure to add the missing accounts or groups and grant them the “Local Activation” permission.

After changing the permissions for the accounts you must perform an IISRESET on the server.

Remember: to prevent this error from occurring again on servers hosting SharePoint, you will have to add all the application pool accounts, including the accounts used for the Central Administration web application (only if Central Administration runs on this server) and the Shared Service Provider administration web application.

Firewall

Event 10038 might also occur if a network issue is preventing the search (index) server from connecting to the query servers. Since Windows Server 2008 the firewall is enabled out of the box, so remember to allow all inter-farm and Windows server/domain traffic between the servers hosting SharePoint Server.
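As a sketch, a single inbound rule created with netsh can whitelist the other farm members. The rule name and IP addresses below are placeholders; substitute the addresses of your own index and query servers:

```powershell
# Allow all inbound traffic from the other SharePoint farm servers
# (placeholder IPs; replace with your index/query server addresses)
netsh advfirewall firewall add rule name="SharePoint farm servers" dir=in action=allow protocol=any remoteip=10.0.0.11,10.0.0.12
```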

In the SharePoint business, administrating the SharePoint farm often also involves taking care of the databases SharePoint runs on. Even if your organisation employs dedicated DBAs, the limitations on what you can do with the databases mean the SharePoint administrators will usually be at least partly involved in creating the procedures for backup and restore and the regular maintenance jobs.

For testing purposes, such as performance tests or granular restores, the SQL Server backup and restore functions will sometimes involve restoring the SharePoint databases to another SQL Server instance or even a different server. In these scenarios, once you have restored the database, the database users will be orphaned; even if the SQL login names on the new server are exactly the same, the users still become orphaned. This is because database users are identified by SIDs, and when you restore a database to another server or instance these SIDs no longer match the SIDs of the SQL Server logins in the master database.

When you try to reset the permissions for the SQL Server logins in the database, you will receive an error message and the permissions will not be reset.

Error 15023: User or role <username> already exists in the current database.

This problem can be fixed using Transact-SQL scripts that help you identify and repair the orphaned users. To identify which users are orphaned, use the following script; it returns a list of all orphaned database users.

USE <Database Name>
GO

sp_change_users_login @Action='Report'
GO

Once these users are identified, you can match them in the restored database to SQL Server logins with the following script. With this method you can match the database user to any SQL Server login.

USE <Database Name>
GO

sp_change_users_login @Action='update_one',
@UserNamePattern='<Database User Name>',
@LoginName='<SQL Server Login Name>'
GO

As an alternative you can also use the following script if the database users in the restored database are named exactly like the SQL Server Logins you want to link.

EXEC sp_change_users_login 'Auto_Fix', '<User Name>'
GO

This will solve the problem of orphaned users, and when you add the database to SharePoint it will prevent Access Denied messages from showing up in your web application.


When working with SharePoint databases, oftentimes SharePoint identifies a database only by its GUID. This can be frustrating because you want to know exactly which database you are working with. As far as I know there is no stsadm command available to easily translate a GUID to a database name.

A word of warning: the following script directly queries a SharePoint database, which is never a good idea. I therefore recommend you restore the database to a restore or test server and execute the query there, to prevent any potential issues in your production environment.

Luckily there is a way to get this information from SQL Server: run a query against the configuration database (see the warning above).

First, open SQL Server Management Studio:

  1. Select New Query.
  2. Select the Configuration Database from the Available Databases dropdown list.
  3. Copy the script below and paste it into the Query window and press Execute.

Select ID, Name from Objects
where Properties like
'%Microsoft.SharePoint.Administration.SPContentDatabase%m_nWarningSiteCount%'

This will output the database GUID and database name in a nice list, giving you the data you need to identify which database SharePoint is talking about when it talks GUIDs to you.

As stated, there is some risk involved in running queries against SharePoint databases; Microsoft advises against running any query against a SharePoint database, as it might interfere with SharePoint processes running against it. If that happens the database loses its supported status, and if you need to contact Microsoft for support in the future you may be asked to replace the database with a fresh copy. I usually restore a recent backup of the configuration database to a restore or test server and run the query there. This gets me the data while still complying with the “No query for you!” rule that SharePoint databases have.


Not too long ago I posted about how to safely remove (detach) content databases from the SharePoint farm, to enable you to move or replace databases between web applications and/or database servers. One of the reasons it is important to use the preparetomove command is to prevent problems when you reattach these databases to the farm.

Usually these problems show up in the event viewer as event IDs 5555 and 7888. To solve the sync issues that prevent profile information inside the content databases from being updated, you can follow these steps.

First we want to find all databases that are not synced up correctly; to do this we use the command:

stsadm -o sync -listolddatabases <number>

This will give you a list of database GUIDs and the date/time they were last synchronized. We need the GUIDs to get the databases back in sync. The following command marks the GUIDs in a content database as old; the next time the indexer runs, it will generate new GUIDs for that content database.

stsadm -o preparetomove -contentdb <database server name>:<content database name> -oldcontentdb <GUID>

where <GUID> is a GUID from the list generated with the listolddatabases command above. Repeat this for every database in the list you want to ‘recover’.

Once your next full crawl completes, you can run the stsadm -o sync -listolddatabases <number> command again. Anything still on the list can likely be removed at this point, which you can do by running the following command.

stsadm -o sync -deleteolddatabases <number>

This will delete all GUID entries in the SSP for anything that has been out of sync for more than <number> days. After running these commands, the events should stop appearing in the event log.

You can test this by decreasing the sync timer job interval to 5 minutes and checking the event log to see if the events are gone. You can do this by running the following command.

stsadm -o sync -synctiming "m:5"

Then reset the sync timing back to the default value of 1 hour by running the following command.

stsadm -o sync -synctiming "h:1"

This procedure will solve the out-of-sync issues you may have in your farm and will ensure that the relation between content database and SSP is optimal. It also means profile information residing inside the content databases will be updated again, and the crawl should show fewer errors as well. As I stated in the previous post, you can prevent this problem by using stsadm -o preparetomove before detaching a database.

Last week I posted about the problem with local loopback connections and showed how to solve it with a registry setting. So I was positively surprised when I checked Gary Lapointe’s blog today and saw he had created a new extension for stsadm.exe. This new extension configures the SharePoint site URLs to allow local loopback connections to those URLs; it takes a lot of work out of your hands by adding all URLs to the BackConnectionHostNames key to solve the connection problems.

A small recap of the command from his blog:

C:\>stsadm -help gl-setbackconnectionhostnames

stsadm -o gl-setbackconnectionhostnames
Sets the BackConnectionHostNames registry key with the URLs associated with each web application.
Parameters:
        [-updatefarm (update all servers in the farm)]
        [-username <DOMAIN\user (must have rights to update the registry on each server)>]
        [-password <password>]

His post can be found here as well as links to his extensions and scripts.

You may encounter errors when you try to access SharePoint sites locally from a front-end server. This problem occurs because, since Windows Server 2003 SP1, Windows includes a security feature called the loopback check. By default, the loopback check is turned on in Windows Server 2003 SP1 and Windows Server 2008.

Ever since then, the only advice I have heard is to disable the loopback check entirely using a registry setting called DisableLoopbackCheck. You can configure this at the following location.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa

Create a DWORD value called DisableLoopbackCheck and set its value to 1.
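Scripted, assuming the default hive path above, that is:

```powershell
# Disable the loopback check entirely (reduces security; see the caveat below)
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa" `
    -Name DisableLoopbackCheck -PropertyType DWord -Value 1
```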

However, this disables the entire feature and thus reduces the security of the affected system. In some cases this may still be a valid way to circumvent the problem, but Microsoft offers a better solution that lets you set exclusions for host names used on the server, called BackConnectionHostNames.

The following registry entry is used to configure this:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0

Create a multi-string value named BackConnectionHostNames that contains the list of host names.

For SharePoint this means you need to enter the host headers of your SharePoint sites in this list. Alternatively, you may want to investigate configuring this via a script and/or via Group Policy for manageability.
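A minimal scripted version might look like the sketch below; the host names are placeholders for your own SharePoint site host headers.

```powershell
# Placeholder host headers; replace with your SharePoint site URLs' host names
$hostnames = @("portal.contoso.com", "mysites.contoso.com")

# Create the multi-string exclusion list for the loopback check
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0" `
    -Name BackConnectionHostNames -PropertyType MultiString -Value $hostnames
```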

Note: this solution also solves the problem of connecting SQL Server Management Studio to the local database server for management.