Category Archives: Guide

SharePoint document templates: A solution to the one-template-per-content-type problem?

One of our customers was upgrading their intranet from SP2010 to SP2013 and wanted help improving their document management solution at the same time. Their existing solution already had a lot of document templates, and since we wanted to merge some of the libraries that had different templates, we were looking at libraries with perhaps 30-50 templates, which in SharePoint means 30-50 content types. Not only would this be a pain to maintain, it also makes it much harder for users to find and use the right template. And let’s face it, managing document templates in SharePoint is already an awful experience, even with just a few templates to manage. In this case, as in most, the metadata from SharePoint had to be visible in the documents created from the templates as well. That meant having quick parts in the templates connected to the fields in the SharePoint library.

So, in a conversation with the customer, one of them asked if we couldn’t simply put the templates in Word instead. My immediate reaction was “Hmm… no… I don’t think so. I don’t think the connection to SharePoint will be maintained”. But then I thought about it and figured it just might work. If the template was stored in a SharePoint library, the correct quick parts could perhaps be added and work anyway. So I did some searching and came up with the following solution:

  1. Put the documents that should be used as templates in a document library in SharePoint
  2. Set the Workgroup templates property in MS Word to the address of the SharePoint template library

In other words, new documents aren’t created from the SharePoint library at all. Instead the user creates new documents directly from Word, which is an improvement in my opinion (though I would prefer if you could still create documents from SharePoint as well). The Workgroup templates property in Word lets users point to a folder, making all the documents in that folder appear as custom templates in Word. The problem is that you’re not allowed to set the Workgroup templates property to a web address. I did some searching and found a few posts on how to work around this. What you need to do is:

  1. Map a network drive to the address of the SharePoint library
  2. Set the Workgroup templates property in MS Word to another network drive (not the one you just created)
  3. Open regedit and modify the property to point to your own network drive

After performing these steps, the files stored in the SharePoint library can be used as templates in Word, with working quick parts, and you won’t even need to open SharePoint to create documents anymore. Note: This works in both SharePoint on-premises and SharePoint Online!

Ok, so how do we do this?

1. Map a network drive to a SharePoint library

There are plenty of examples out there on how to do this. Here is the MS one: https://support.microsoft.com/en-us/kb/2616712?wa=wsignin1.0

  • Make sure the site with the document library is added to your list of Trusted Sites.
  • If using SharePoint Online, log into your tenancy, and make sure to remember the credentials.
  • Right-click Computer and choose “Map network drive…” in the menu.
  • In the Map Network Drive dialog, enter the following. Drive: choose a drive to map it to (I like S: as in SharePoint =). Folder: paste the URL of the document library.
  • Click Finish

Windows Explorer should now open automatically, showing you the contents of the library. [Image: MappedDrive]

2. Set the Workgroup templates property in MS Word

The next thing we need to do is set the Workgroup templates property in MS Word. This will make the given location be used as a folder for custom document templates. I am using Word 2013 in this example, but it should work for 2007 and 2010 as well, even if the paths may vary.

  • Open MS Word
  • Go to File –> Options –> Advanced (scroll to the bottom) and click the “File Locations…” button.
  • Modify the Workgroup templates property and set it to a non-web address location. I set it to D: for example. [Image: WordWorkgroupTemplates]
  • Confirm all the dialogs and close Word.

3. Open regedit and modify the property

  • Open up the registry editor (press the windows key, type “regedit”, press enter)
  • Go to HKEY_CURRENT_USER\Software\Microsoft\Office\15.0\Common\General. Note: Replace “15.0” with your current Office version.
  • Modify the “SharedTemplates” property and set it to the drive of your SharePoint library, in my case S:. [Image: regedit]
  • Confirm the dialog and close the registry editor.

4. Create a new document using your new template

And now you are done, ready to use the documents stored in the library as templates!

  • Open Word and go to the New screen
  • Click the Custom tab, and you will see a folder named “S:” (or whichever drive you mapped the library to). [Image: newDocument]
  • Open the S: folder to see all the documents stored in the library. Just click the one you want to use as a template. [Image: newDocument2]
  • Word will now copy the document from SharePoint and create a new file for you, with SharePoint field references working perfectly. [Image: newDocument3]

Benefits over regular SharePoint document templates

Easier to update templates. To edit the templates, you just edit the documents stored in the template library, rather than edit the template file connected to a content type, saving you a lot of time, especially when adding or changing quick parts in the template.

Create documents from Word, instead of SharePoint. Word is the program you use to edit documents, so why should I have to go to SharePoint to create a new one? Wouldn’t it be easier if I could create a new document from a template directly in Word? Yes it would. It would simplify the process greatly.

Separation between Content Types and Templates. Having a 1-to-1 relationship between Content Types and Templates is a system design mistake of epic proportions, and one of the reasons you cannot create a great document management system in SharePoint without customizations. Separating the two is a major win, enabling you to have a great number of templates without adding unnecessary complexity.

Limitations and drawbacks

Single Site Collection. This solution will ONLY WORK ON A SINGLE SITE COLLECTION! The templates need to be stored in a library on the same site collection where the new documents will be saved. It can be on a different web site, but save a document on another site collection and the fields won’t update inside the document, even if the content type is distributed through a content type hub. This means that users need to know where to save their documents for them to work as expected.

This basically means that you cannot have a document management system (DMS) consisting of several site collections, which you shouldn’t want anyway. But it does put a limit on scalability. You can use archiving to keep your DMS site collection below the recommended levels, but it’s still more limited. There may be a way around this by ensuring that the SourceID and internal name attributes of the fields are consistent between site collections, but I haven’t tested it yet.

I hope you give this a try. It’s a working (although not great) way of separating Content Types and Templates, which is something MS should have done a looong time ago.


Discovering Web Workers

So I wanted to learn about Web Workers, since they are (as far as I have come to understand) the only real way of running javascript in a separate thread. The following post is my explanation of the concept as I have perceived it at the time of learning.

Note: I wrote this post in the process of learning, meaning I’m not an expert. Don’t take my word as the truth. This is just my interpretation of what others have said, and of my own tests.

Web Workers

Web Workers allow running scripts “in the background”, in a separate thread from that of the user interface, allowing tasks to be performed without disturbing the user experience. Since javascript is a single-threaded language, only able to simulate multithreading by using for example yield() or setInterval(), workers might be the only option for running javascript in separate threads. At least as far as I know.

Some things to know about web workers

  • Workers have a high start-up performance cost
  • Each worker instance consumes a high amount of memory
  • Workers are not intended to be used in large numbers at the same time

A worker should not be something you call frequently to perform small tasks, since the cost of calling the worker will exceed the benefit of running it in a separate thread.

So when do you use Web Workers then? Well, if you want to run something that has a high performance cost, without interfering with the user experience, web workers can be a viable option. Uses can include, for example, performing heavy calculations, or having a “listener” for notifications running in the background.

So how do you do it?

Simple example

First of all, I have a simple html page, with a span to post my results, and buttons to start and stop my worker.

<html lang="en">
 <head>
[...]
 </head>

 <body>
 <input type="button" onclick="startWorking();" value="Start working!">
 <input type="button" onclick="stopWorking();" value="Stop working!">

 Results: <span id="resultSpan">...</span>

 <!-- load scripts at the bottom of your page -->
 <script src="javascript/foreman.js"></script>
 </body>
</html>

Next, I have a javascript file being loaded to the page. This is not my worker, but the script responsible for calling the worker. I call it foreman.js.

The first thing I want to do is to get my resultSpan element to present the results of the workers. I also create a variable for storing my worker object.

var result = document.getElementById("resultSpan");
var worker;

Next I create a function for stopping the worker.

function stopWorking() {
 worker.terminate(); // Tell the worker to stop working.
 worker = undefined; // Fire the worker.
}

Not all browsers support workers, so a function to check for worker support might be a good idea.

function browserSupportsWebWorkers() {
 if(typeof(Worker) !== "undefined") {
 // Yes! Web worker support!
 return true;
 } else {
 // Sorry! No Web Worker support..
 return false;
 }
}

And now to the important parts. We want to be able to call our worker, so we create a function for doing just that.

function startWorking() {
// Code goes here.
}

The first thing I do is check whether the browser supports workers. If it doesn’t, I can handle it in different ways. This is good if you don’t want your functionality to break due to compatibility issues.

 if (!browserSupportsWebWorkers()) {
 // What to do if browser doesn't support workers.
 return;
 }

Then I instantiate a new worker object, referencing the worker javascript file. And yes, the worker code needs to be located in a separate file.

 worker = new Worker("javascript/worker.js"); // Create a new worker object.

// Code goes here.
}

Now that we have the worker object, we need to define what will happen when we get a response from it. Communication between the foreman and the worker is passed through messages, and we need to declare what to do with those messages. The code below shows two ways of doing the same thing.

 worker.onmessage = function(event) { // Tell the foreman what to do when the worker response comes.
   result.innerHTML = event.data;
 };

 // This is another way of doing the same thing.
 worker.addEventListener("message", function (event) {
   result.innerHTML = event.data;
 }, false);

In the code above, we declare that when we receive a message from the worker, we will take that message and show it in our resultSpan by setting its innerHTML.

We may also want to handle what happens if an error occurs. In addition to the .onmessage event, we can declare the .onerror event for just this reason.

worker.onerror = function(event) { // Tell the foreman what to do when the worker fails.
   result.innerHTML = "Error: " + event.message;
 };

The last thing is to call the worker. This is done by calling the postMessage function of the worker object.

worker.postMessage("start"); // Tell the worker to start working. Note: postMessage requires a message argument, even if the worker ignores it.

What will happen now is that the worker javascript will be loaded and executed. The results will depend on what we put in the worker.js file. All we have done in foreman.js is to say that we will present the results of the worker. So let’s take a look at the actual worker: worker.js.

var i = 0;

function timedCount() {
 console.log("Worker says: Counting to " + i);
 i = i + 1;
 postMessage(i);
 setTimeout(timedCount, 500); // Pass the function reference instead of a string to avoid eval.
}

timedCount();

In this simple example, all I want to do is to illustrate a continuous process being run in the background. The worker runs a recursive function every 500 milliseconds and sends the response back to the foreman. The message is being passed by calling the postMessage function. The object put as a parameter in postMessage will be available in event.data in the foreman script. In this case it’s an integer, but it could just as well be a string or JSON object.
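Since postMessage only exists inside a real worker, the message flow above can be illustrated in isolation by stubbing it out. Note that the stub and the synchronous recursion with a limit are my simplifications for testing; a real worker would keep the setTimeout as shown.

```javascript
// Stand-in for the worker environment: in a real worker, postMessage
// sends data back to the page; here a stub collects the values instead,
// and the 500 ms setTimeout is replaced by plain recursion with a limit.
var sent = [];
function postMessage(data) {
  sent.push(data);
}

var i = 0;
function timedCount(limit) {
  i = i + 1;
  postMessage(i); // Each call reports the current count to the "foreman".
  if (i < limit) {
    timedCount(limit);
  }
}

timedCount(3); // sent is now [1, 2, 3]
```

Each value pushed into `sent` corresponds to one message the foreman’s onmessage handler would receive.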

Calling specific worker functions

You cannot call a specific function within a worker directly. When the worker is called, it simply runs the file. However, you can implement your own handling by passing function names as parameters.

In my next example, I have another worker file, skilledWorker.js. It contains three functions. I choose to store these in an object called actions, and you will see why later. This is not required however, and there are many ways of implementing support for calling certain functions.


var actions = {};
actions.count = function (parameters) {
  // Unpack parameters.
  var number = parameters;

  setTimeout(function () { // Call setTimeout to run the function again after 100 ms.
    postMessage(number); // Message the foreman of the current number.
    number++; // Increment number.
    actions.count(number); // Recursively call the same function to increment number with each call.
  },100);
}

actions.calculate = function (parameters) {
  // Unpack parameters.
  var a = parameters.a;
  var b = parameters.b;

  var results = a + b;
  postMessage(results);
}

actions.read = function (parameters) {
  // Unpack parameters.
  var results = parameters.text;
  postMessage(results);
}

Next, I need to declare what will happen when my worker receives a message.


self.onmessage = function (event) {
  handleMessage(event);
}

This says that I should call the function handleMessage and pass my event whenever a message is received. All that’s left is implementing the handleMessage function.

function handleMessage(event) {
    var command = event.data.command;
    var parameters = event.data.parameters;
    var action = actions[command];
   
    if (action) {
        action(parameters);
    }
    else {
        postMessage("Unknown command");
    }
}

What happens here is that we retrieve event.data and get two properties from it, command and parameters. These have to be passed when calling the worker, and I will show how in a bit.

For the next piece of code I got a little help from my friend and colleague Anatoly Mironov, who has one of the best SharePoint blogs out there.

Since we store our functions in the object called actions, calling actions[command] will return the function matching the command string. If no function matches, the value will simply be undefined. The simple if-statement lets you check and handle what happens when trying to call a function that doesn’t exist.
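Stripped of the worker plumbing, the lookup-and-dispatch pattern can be sketched on its own. This is just an illustration: the names mirror the post, but the return value stands in for postMessage so the logic can run anywhere.

```javascript
// Minimal sketch of the actions-object dispatch pattern, outside any
// worker. Returning a value stands in for calling postMessage.
var actions = {};
actions.calculate = function (parameters) {
  return parameters.a + parameters.b;
};
actions.read = function (parameters) {
  return parameters.text;
};

function handleMessage(data) {
  var action = actions[data.command]; // Look up the function by name.
  if (action) {
    return action(data.parameters);
  }
  return "Unknown command"; // No matching function: actions[command] was undefined.
}
```

For example, handleMessage({ command: "calculate", parameters: { a: 5, b: 10 } }) returns 15, while an unknown command falls through to "Unknown command".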

The last thing we need to do is to call the worker passing the correct command and parameters.

Passing parameters

Calling a worker with parameters is still done with postMessage(). You can pass either a string or a JSON object. This example will demonstrate how to pass a JSON object. Passed parameters will be available in the worker in event.data. As you saw above, our worker’s handleMessage function needed the data to contain .command and .parameters. So when calling postMessage() we simply pass in a JSON object containing these two values.


worker.postMessage( { 'command': 'count', 'parameters': 1 } ); // Tell the worker to start counting.

worker.postMessage( { 'command': 'calculate', 'parameters': {'a': 5, 'b': 10 } } ); // Tell the worker to calculate.

worker.postMessage( { 'command': 'read', 'parameters': {'text': 'This is text the worker is supposed to read.'} } ); // Tell the worker to read the text.

In the code above, each line will call the worker, but run a different function, which in turn uses different parameters.

In conclusion

Web workers are not very difficult to work with once you understand how they work, and while their use might be limited due to the heavy initial performance cost, being able to run a background thread for large tasks can be quite powerful.

If you want to check the full code I have a GitHub repository for it here: https://github.com/Johesmil/webworkers.

If you want to learn more about Web Workers from people who actually know what they’re talking about, check out the links below. =)

Sources

Web Workers

http://www.htmlgoodies.com/html5/tutorials/introducing-html-5-web-workers-bringing-multi-threading-to-javascript.html
http://www.w3schools.com/html/html5_webworkers.asp
https://developer.mozilla.org/en-US/docs/Web/Guide/Performance/Using_web_workers
http://anders.janmyr.com/2013/02/web-workers.html
http://www.html5rocks.com/en/tutorials/workers/basics/

Export SharePoint list data to XML directly from the GUI

The other day I learned of a cool function in SharePoint which can come in handy if you want to export a list to XML. And best of all, no code, script or SharePoint Destroyer… *cough* … Designer needed. What you do is simply call an OOTB SharePoint service and specify in the query string what you want, and in which format:

http://<site url>/_vti_bin/owssvr.dll?Cmd=Display&List=<list guid>&View=<view guid>&Query=*&XMLDATA=TRUE

So what you do is call owssvr.dll from the site you want to export from, and in the query string add Cmd=Display. Then you add the List and View you want to export from. If you want all items and fields you simply set Query=*. Mind, you might still have to reference a view, even though it won’t be used with the query. And at the end, add XMLDATA=TRUE. That’s it! An example of how it might look:

http://myawesomesite/_vti_bin/owssvr.dll?Cmd=Display&List={002A6DE2-7638-4FEF-A7CD-7427D4DECABA}&View={757d5548-eafc-4a5f-8ef4-e0be36d790a3}&Query=*&XMLDATA=TRUE

You can get the GUID of the list by simply going to the list settings and copying the GUID from the URL. It’s the GUID after “…&List=”. That’s it. =) Some documentation about it and other SharePoint services: http://msdn.microsoft.com/en-us/library/dd588689(v=office.11).aspx
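As a rough sketch, the query string above can be assembled programmatically. Note that buildExportUrl is a hypothetical helper I made up for illustration; only the URL format itself comes from SharePoint.

```javascript
// Hypothetical helper that assembles the owssvr.dll export URL described
// in the post. The function name and parameters are made up; the URL
// format is the one SharePoint expects.
function buildExportUrl(siteUrl, listGuid, viewGuid) {
  return siteUrl + "/_vti_bin/owssvr.dll" +
    "?Cmd=Display" +
    "&List=" + listGuid +
    "&View=" + viewGuid +
    "&Query=*" +
    "&XMLDATA=TRUE";
}

var url = buildExportUrl(
  "http://myawesomesite",
  "{002A6DE2-7638-4FEF-A7CD-7427D4DECABA}",
  "{757d5548-eafc-4a5f-8ef4-e0be36d790a3}"
);
// url now matches the example URL above.
```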

Automatic minifying of CSS (and LESS) and javascript using Web Essentials 2013 in Visual Studio 2013

We just recently started upgrading to Visual Studio 2013 in the project I’m currently working on, and with VS 2013 comes Web Essentials 2013, an extension that’s truly essential for web development.

Now, I like to use the LESS framework when writing CSS, and have been using Web Essentials 2012 for some time. One of the nice things about LESS and Web Essentials 2012 was that it automatically generates a minified version of the CSS file for you, and that’s pretty sweet.

Now, one of the first things we noticed in VS 2013 was that modifying and saving our LESS files no longer generated a minified version of the file.

[Image: no minified file generated]

At first we thought it might be a bug, but when exploring the toolbar menu for Web Essentials (new to 2013), we found an interesting button:

[Image: Web Essentials toolbar menu]

Pressing this created a settings file and added it to the solution. In this file we found a number of awesome things. For example, you could turn the automatic generation of minified CSS files on and off. And even better, there was even an option to do the same for our javascript files!

[Image: Web Essentials settings file]

Now we were getting our minified CSS files just like before, and the same behavior for our javascript files too!

[Image: minified files generated]

Before, we used another VS plugin for generating our minified javascript files, but now we no longer need to. Everything is taken care of by Web Essentials 2013 and the settings file.

Perhaps the best thing about the settings file is that it is automatically added as a solution file and picked up by source control. So once configured, we can just check in the file and let everyone in the team get the correct behaviour automatically.

Now, I may be ignorant of what was possible in 2012. Perhaps this settings file was available, and perhaps there was support for minifying javascript files. But since Web Essentials is now more visible than before (having its own toolbar menu), finding these features was easier, and took only a few minutes to figure out, without googling for help or reading any product update notes. And to me that’s pretty sweet! =)

Increase disk space for a VMWare virtual machine

I had a virtual machine with about 40 GB of hard drive space, and I needed to add another 20. At first I thought this would be supported by VMPlayer, but as I found out, it wasn’t. If you have VMWorkstation you have access to a tool called vmware-vdiskmanager which can do just that, but since I didn’t, I had to find another solution. Thankfully, one of my colleagues had a great solution using Ubuntu.

Yes, Ubuntu is an operating system. NO, YOU DON’T HAVE TO INSTALL ANYTHING, not even Ubuntu! I know it seems like there are a lot of steps, but that’s just because I’ve broken them down into very small ones to make it easier. It is really not a lot of work.

NOTE!
I personally didn’t run into any issues when doing this, and my virtual machine worked fine afterwards. But since it is a delicate operation, you should really make a copy of your virtual machine before trying this.

Firstly, you have to expand the amount of disk space the virtual machine is allowed to use. This is done directly in VMPlayer when the virtual machine is shut down.

  1. Open VMPlayer
  2. Click on the virtual machine whose hard drive you want to expand.
  3. Click on Edit virtual machine settings
  4. On the Hardware tab, click on the Hard Disk in the device list. On the right-hand side a Utilities drop-down will appear; click it and select Expand.

NOTE:
One might think that this should be enough, and that the disk space of your virtual machine would magically expand to the set amount, but that is not the case. It makes sense if you consider that a virtual machine’s hard drive is composed of partitions, just like a regular computer’s. When you expand the disk, you simply give the virtual machine more disk space; you don’t increase the space of the already existing partitions. To do that, as I said earlier, you need another tool. I’m sure there are plenty of working tools out there, and some may be easier to work with than this, but for me doing it with Ubuntu worked great.

The second thing you need to do is increase the space of the already existing hard drive partition within your virtual machine. You can do this by following these steps:

  1. Download Ubuntu and save the iso-file to the hard drive on your regular computer (not the virtual machine).
  2. Start your virtual machine. From the top menu bar, click Virtual Machine -> Removable Devices -> CD/DVD(IDE) -> Settings. This will open a settings dialog.
  3. On the hardware tab, click CD/DVD (IDE) in the device list.
  4. Under the Connection area, click Use ISO image file and then browse. Browse to the location of the Ubuntu ISO and open it.
  5. When the Ubuntu ISO is selected, just click OK to save and close the Settings dialog. The ISO should now be mounted and readable as a DVD from within the virtual machine.
     Note: Ubuntu lets you start a demo version of the OS without installing anything. All you need to do is boot the machine using the mounted ISO. This is quite amazing if you ask me. But then again I’m not very proficient with these kinds of things.
     Now you need to restart your virtual machine and boot it from the mounted image. This can be a bit tricky, because you need to change the settings in the BIOS, and you hardly have any time at all to enter the BIOS during the boot sequence.
  6. Restart the virtual machine. When it says Shutting Down, start spamming F2 (press the key repeatedly). You have about a 0.5-second window during which you have to press it, so just hammering the key worked best for me.
  7. Once you’ve entered the BIOS, go to the Boot tab and change the order of the devices so the CD-ROM Drive is positioned above the Hard Drive.
  8. Save and exit the BIOS (F10). The virtual machine should now boot using the mounted image, and you should come straight into the Ubuntu operating system.
     Note: I had a bit of trouble while inside Ubuntu. The graphics of my mouse pointer did not appear in the same location as the actual mouse pointer, so I had a hard time clicking the right buttons. This was manageable using keyboard commands and clicking the mouse randomly to see its current position.
  9. When the first screen appears, choose Try Ubuntu, don’t install it.
  10. From within Ubuntu, press the Windows key on your keyboard to open a search panel. Type “gparted” and an icon for the GParted Partition Editor will appear. Press the down arrow key until you’ve selected the GParted icon and then press enter.
  11. You should now see a list of hard drive partitions and a graphical representation of them. You can increase the size of a partition by selecting it in the list (a bit tricky if you have the mouse problem explained earlier), clicking the Partition tab in the menu bar, and then pressing Resize/Move.
  12. Increase the value of New Size to the maximum size and press Resize/Move. A new pending operation should be created and displayed at the bottom of the screen.
  13. Click the Apply All Operations button from the Edit tab in the menu bar.
  14. Wait until completed.
  15. To restart the computer, press the Windows key. Type “terminal”, press the down arrow key until you’ve selected the terminal icon and press enter.
  16. In the terminal window, type “sudo reboot” and press enter. Don’t forget to enter the BIOS again when the machine restarts by pressing F2, and change the BIOS settings to boot from the hard drive.

And that should be it. Worked for me at least.

Get Ubuntu here. It is a free open source operating system.

If you have VMWorkstation you should already have a tool for doing this. You can read how to here.

How to connect the Chart WebPart to an excel document

This is how to connect a Chart WebPart to collect data from an excel document.

Since I did this in Swedish, some of the button names I give might not be exactly right, but they should at least be enough to figure out what to press.

  1. You need to activate Excel services on the server.
  2. Activate the feature in site collection features.
  3. Add a Chart WebPart to any page.
  4. Click the Data and appearance button
  5. Click Connect to data button
  6. Choose Excel Services and click next
  7. Fill out the path to the excel document and the interval string to select the data needed
    For example:
    Path: http://MySite/Lists/MyDocumentLibrary/NameOfMyExcelDocument.xlsx
    Interval: Sheet1!$A$1:$C$7 (This would select data from Sheet1, and from cells A1 to C7)
    Click next
  8. Choose filters if needed. Click next
  9. Choose which fields to use in the chart and press finish
  10. Once again, click Data and appearance button, but this time press the Adjust diagram link
  11. Choose your prefered diagram type and press finish

Updates in the source file will automatically display in the web part once the page is reloaded.
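To illustrate the interval format, a reference like Sheet1!$A$1:$C$7 breaks down into a sheet name and two cell corners. The parser below is just a sketch I wrote for illustration; it is not anything SharePoint or Excel Services provides.

```javascript
// Hypothetical helper that splits an Excel Services interval string such
// as "Sheet1!$A$1:$C$7" into its parts. For illustration only.
function parseInterval(interval) {
  var parts = interval.split("!");     // ["Sheet1", "$A$1:$C$7"]
  var cells = parts[1].split(":");     // ["$A$1", "$C$7"]
  return {
    sheet: parts[0],                   // "Sheet1"
    from: cells[0].replace(/\$/g, ""), // "A1"
    to: cells[1].replace(/\$/g, "")    // "C7"
  };
}
```

So parseInterval("Sheet1!$A$1:$C$7") describes a selection on Sheet1 from cell A1 to cell C7, matching the example in step 7 above.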

For a more extensive guide, check out this resource which was a great help to me: