Archive of Technology posts

What do we want?

At TCUK12 this year, I chatted with several people about authoring tools. Vendors, other technical writers, managers: I asked them all the same two questions, again and again.

What authoring application do you use, and why do you use it?

The answers were illuminating, interesting and always useful. There are many, many options out there, catering to many different needs, and all of them have a different set of strengths and weaknesses. Alas, no matter how hard I tried, regardless of how many ways I tried to bend our requirements, all of those conversations led me to the same conclusion.

No-one out there builds what we want, so we may have to build it ourselves.

As part of improvements to our content, one of my team has led the charge to restructure our information. She has a passion for information architecture and devised a three-pronged approach to our content. You can navigate in by role, by product area or… by something else we haven’t yet decided upon.

We’ve audited the topics we have and applied some simple structuring decisions, and it is looking good so far. The problem we will soon face is that we will need to build this new structure and make it usable by our customers.

What we would like is to be able to tag our topics, and use those tags to present a default structure to our information. The tags would also allow users to filter the topics they see and, either by addition or subtraction, create a unique set of information for their needs. Ultimately this would lead to personalisation of content in a basic form, but that could easily be enhanced to provide a smarter take on content for each user.
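To make that concrete, here’s a rough sketch of the kind of tag filtering we have in mind; the topic titles and tag names below are invented purely for illustration:

// A sketch of tag-based filtering, assuming each topic carries a
// simple array of tags. Topics and tags are invented for illustration.
var topics = [
    { title: 'Install the server', tags: ['administrator', 'installation'] },
    { title: 'Write a query', tags: ['developer', 'reporting'] },
    { title: 'Schedule a report', tags: ['administrator', 'reporting'] }
];

// Addition: keep only topics carrying any of the selected tags.
function withAnyTag(list, selected) {
    return list.filter(function (topic) {
        return topic.tags.some(function (tag) {
            return selected.indexOf(tag) !== -1;
        });
    });
}

// Subtraction: drop topics carrying any of the excluded tags.
function withoutTags(list, excluded) {
    return list.filter(function (topic) {
        return !topic.tags.some(function (tag) {
            return excluded.indexOf(tag) !== -1;
        });
    });
}

// The default structure is just the full set; a 'developer' view is:
var developerView = withAnyTag(topics, ['developer']);

Nothing clever, but it shows how a handful of tags could drive both a default structure and a per-user view of the content.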

Alas it seems that, without doing a lot of customising of an XML-based (most likely DITA) system, we won’t get near that, and even the products that get close require a compromise somewhere. Most of the time that compromise would be, for the team of writers, a step back to a less friendly interface and more exposure to the underlying technology of the tool they are using. At present Author-it provides a simple authoring environment that allows the writers to concentrate on writing content.

But perhaps that is the point. Maybe it’s time to try a different set of tools, adopt new working practices, take on the bigger challenge.

Further Webhelp hacking

I mentioned in my previous post that we run a webhelp build of our content (a.k.a. our Knowledge Centre) on our developer community website, and that it was hosted in an iframe. I thought it worthwhile fleshing out the detail of that as it includes a bit of custom code some others might find useful.

As our content is locked behind a login, we need to be sure that only people who are logged in can access it. This is achieved by a couple of simple checks.

1. When the Knowledge Centre is loaded, a script runs that checks it has been loaded within the correct iframe on our website. If it’s not, the user is redirected to the login page.

The JavaScript for this is added to the webhelp.js file (around line 106):

//----------- init function ------------
Kbase.init = function() {

    // OUR redirect: if the Knowledge Centre has been loaded outside
    // the iframe, send the user to the page that hosts it
    if (window.top.location == window.location) {
        window.top.location = 'URLTOYOURIFRAME';
    }

2. If the Knowledge Centre has been loaded in the correct iframe (in other words, the above JavaScript is happy), the website checks for a cookie (confirming the user is logged in) and then either loads the Knowledge Centre or, again, redirects the user to the login page. The JavaScript for this is standard cookie-checking stuff (Google will find you a zillion solutions).
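For completeness, the check is roughly this shape; a minimal sketch, assuming your login system sets a cookie (the cookie name and the login page placeholder below are invented for illustration):

// A sketch of a standard cookie check. The cookie name 'loggedin'
// is invented; yours will depend on your login system.
function getCookie(name) {
    var parts = document.cookie.split(';');
    for (var i = 0; i < parts.length; i++) {
        var pair = parts[i].replace(/^\s+/, '').split('=');
        if (pair[0] === name) {
            return pair[1];
        }
    }
    return null;
}

// No login cookie? Off to the login page (placeholder URL).
if (getCookie('loggedin') === null) {
    window.location = 'URLTOYOURLOGINPAGE';
}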

And that’s it. Nothing particularly clever, but a useful way to (lightly) protect the content of our Knowledge Centre.

On Google Wave

I think I’m starting to get it. I’ve used it a couple of times, though for no other reason than to play with it, but now that I have an actual need for a place to collaborate with a group of geographically dispersed people (the ISTC Community website), it’s starting to make sense.

And I’m not the only person that thinks Google Wave is best suited to this kind of collaboration.

I’ve realised that what Google have done is take the best bits from a couple of different communication channels, combine them and add a couple of improvements.

Those channels are email and Wiki, with a hint of instant messaging thrown in for good measure.

The easiest comparison is to email, with threaded conversations the main thrust of a Wave, but as you can edit and ‘interrupt’ any part of an existing message, with that edit visible to everyone else on the wave, you soon begin to realise that it’s more like a message-based Wiki.

The ability to see new messages in real-time adds in a type of instant messaging, but I think the real value lies in the staggered, traceable, timelined edits of messages. For a collaborative, group project workspace this is wonderful.

I’m still learning Google Wave and, as it’s still being developed, there are a few quirks and annoyances to be overcome but, so far, they are far outweighed by the benefits.

There are other use cases of Google Wave in action, and if you are interested, I do have a small number of invites left.

Analyse this

Let me tell you a story. In it our hero (me) fights valiantly against two Javascript dragons called Webhelp and Google Analytics. It’s a bloody battle and at the end, when all the fighting is done, well … you’ll have to read on and find out.

Some background first.

We have a developer community website which hosts downloads of our software and all the documentation in PDF format. To make it easier for people to find information in the product documentation, we also host a Webhelp version of each and every document in one master Webhelp system so you can search across the entire thing. It works really well.

To track how the other areas of the website are used, we have a Google Analytics account and the necessary code has been added. For the Webhelp, the code is in both the index.htm and topic.htm files.

But, and this is where the story begins, it doesn’t work properly.

Google Analytics will happily track every visit to the Webhelp system, but it stops there. Any click made within the system is recorded as a click, but there is no detail on WHAT topic was viewed. We had hoped to use these stats to better focus on the areas of the product people were enquiring about but we are, essentially, blind.

It’s very annoying.

Why is this so? Well, I think it’s to do with the way Webhelp is created. It uses a JavaScript library called Ext JS which, amongst other things, means that every time you open a topic in the Webhelp it’s loaded through a JavaScript call, so Google Analytics never ‘sees’ a new HTML page (a new topic) being loaded and so doesn’t know what you are viewing.

I think. I’m not 100% sure to be honest.
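If that is the cause, the textbook workaround for script-loaded content is a ‘virtual’ pageview: telling Analytics about the topic yourself, since no real page load ever happens. A minimal sketch, using the classic asynchronous Analytics API; where to hook this into the generated Webhelp code is exactly the bit I haven’t cracked, and topicUrl is a stand-in for whatever the topic loader knows:

// A sketch of a 'virtual' pageview using the classic asynchronous
// Google Analytics queue (_gaq). topicUrl is a stand-in for whatever
// path or ID the Webhelp topic loader has available.
function trackTopicView(topicUrl) {
    if (window._gaq) {
        _gaq.push(['_trackPageview', topicUrl]);
    }
}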

I’ve logged a somewhat vague Support call with Author-it, and have enlisted the help of our own webmaster. Next step will be to beg and plead with some of the developers for some of their brain power (most of them have a fair bit to spare).

It’s hugely annoying, being so close to what we want yet not able to fix it myself, but sometimes you just have to admit defeat.

Of the battle, that is. I WILL win the war!

Paper based

I am a paper junkie. I’m a whore for a nice caliper of paper, not so thick as to be card, not so thin as to be insubstantial. I love the feel of paper, the rustle and rigidity that give way with a subtle movement. I love the sound of ink being laid down, the gentle drag as my hand loops and dots across the page.

Despite all the advances of modern technology, I don’t see this changing. In fact I’m such a slave to this way of thinking that I’ll often print off an email if it contains important information that I’ll need at some point in the next day or so.

As such I walk around with a notebook (A4 size, hard bound, company branded) stuffed with ‘important’ sheets of information, with said sheets usually adorned with numerous, equally important, scribbles and notes.

And of course therein lies the problem. As yet, computers cannot match the speed or convenience of pen/pencil and paper.

It is then a short leap and a step to full-on stationery porn. Lusting over Moleskine notepads, gushing over the smooth flow of ink from a Mont Blanc. I’m not quite there yet. Yet.

But what of paper in our profession? The last time I was involved with a print house was over 10 years ago (blimey) and, these days, whilst we still produce user manuals, they are in the now ubiquitous PDF format. Information is now largely thought of in electronic terms, yet everyone I know prefers reading novels in ‘old fashioned’ print format.

And I guess that is the problem: whilst the main thing we consume and produce is electronically focussed, many of us are still looking to paper as the medium. Which, if you are a paper junkie like me, is both a good and a bad thing.

But mostly bad.

No Kahuna

A few weeks ago I mentioned that we were looking for a new way to track our tasks. After checking out a few different desktop and web applications, I think we have a solution.

The problem we have is that, whilst the bulk of the work is scheduled against a project plan, there are a myriad of smaller tasks and documentation changes that we need to track. These come in through various channels: our Support team, our ‘Core’ team (who maintain the latest stream of the product), and our team inbox.

Previously we mirrored the development team’s approach and used index cards and a BIG whiteboard, but it wasn’t really working for us for a variety of reasons. So I spent a couple of days downloading task-tracking applications and hunting for a web-based application that might meet our needs.

There are many out there and the first thing I realised is that most of them are aimed at the project management set and are very date-driven. Most of the tasks we wanted to track aren’t heavily date-driven, and so are picked up as and when the team has some spare time in the project plan.

One of the first applications I found was TeamWorkPM, which seemed to fit our needs and then some. However, it was still quite over-spec’d for what we had in mind, so when I stumbled upon No Kahuna it was soon apparent that I’d found a good match.

Importantly, No Kahuna is a task tracking application. Dates do not feature. You simply create a project, add project members, then start creating tasks. You can assign a task to a specific project member (or take it for yourself) and when it’s done, it’s marked as completed.

You can add comments to tasks, which is useful as some tasks may sit in the list for a while; commenting lets you build up a level of information for when they are finally actioned.

All very simple, and it worked well enough in our short trial that I’m happy to shell out $7 a month for a private project (not visible to the public). If you are looking for an online, lightweight task tracker, check out No Kahuna.

The black art of estimation

How long is a piece of string?

It’s a common question and one I’ve occasionally used in reply when asked “We are building this new thing. How long will it take to provide some documentation for it?”

Estimating the amount of time it takes to write documentation is tricky as it relies on many differing, subtle factors and, for many people working outside of a highly regimented and heavily project-managed team, it tends to boil down to a mixture of guesswork and experience.

However, it’s not impossible to come up with a more reasoned estimate as long as you don’t mind doing a little planning. Although, to be frank, if you aren’t planning your work, you can probably stop reading now and go find a copy of Information Development: Managing Your Documentation Projects, Portfolio, and People.

So in the spirit of sharing, I thought I’d cover a planning aid I’ve used in the past and have, very recently, uncovered again. It’s focussed on topic-based writing, so it can be used whether or not you are single sourcing your content, though I should caveat that it was created with single sourcing in mind. It is based on a system I’ve seen elsewhere; alas I can’t recall the original source, so if you know where it comes from, please let me know. I’ve adapted it for my own needs but am happy to credit the original author (Hackos?).

The idea is simple enough. You break down your planned content into topics, with a topic defined as a discrete amount of information that shouldn’t take more than a couple of hours to write. Then, when you add in time for review and rewrite, you can take an educated guess at how long an ‘average topic’ takes to complete. So, for discussion’s sake, let’s say an average topic* takes around 5 hours to complete. Nothing revolutionary so far.

Each topic is then scored against four criteria, with the scoring used to add/subtract an appropriate level of variance:

  1. Difficulty of Topic – Do you know what you are writing about or is it brand new? Is it a simple topic or something complex?
  2. Scope of Topic – Does the difficulty dictate that a lot of content is needed? Or is it a short topic of fixed content?
  3. Availability of Information – Are you updating an existing document? Do you have a specification to work from? Or are you having to write from scratch?
  4. Access to SME – Do you have good access to a Subject Matter Expert? Do you have limited access only? Or none at all?

Each topic is scored, from 1 (long, hard, complex) to 5 (short, easy, simple), against each criterion. An ‘average topic’ would score 3 for each criterion, leaving the standard 5-hour estimate unchanged. Scoring the topics this way allows you to factor in a level of variance, so a difficult topic with a large scope, no information available, and no access to an SME will score the lowest marks (1 for every criterion) and has the highest level of variance from your standard topic estimate.

The criteria are fairly high level and you could certainly expand on them for a more granular approach, but I’ve found that most issues can be assigned to one of the above criteria, and that keeps the estimation as simple as possible.

The variance can then be calculated (again with an estimated time) so that you can adjust the time it takes to complete the topic, for example:

  1. Score 1 – variance of +2hrs per criterion
  2. Score 2 – variance of +1hr per criterion
  3. Score 3 – zero variance
  4. Score 4 – variance of -30mins per criterion
  5. Score 5 – variance of -1hr per criterion

The figures given above are, also, estimates. You’ll note that the higher-scored (and therefore lower-variance) topics don’t ‘gain’ you proportionately as much as you lose to the lower-scored (higher-variance) topics. The reality is that, no matter how simple the topic, it still takes time to create (increasing the gain numbers could result in a topic taking less than zero time to create!).

So a long, complex and hard topic, with little to no information and no available expert, will score 1 across the four criteria and so add 8 hours (2hrs per criterion) to the estimated completion time, taking the estimated total for that topic to 13 hours.

Flip the example round: a short, simple topic which comes with sufficient supporting information and an SME sitting next to you to help you write it would score 5 for each criterion, gaining you 4 hours and dropping the estimated total for that topic to 1 hour.
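The arithmetic behind those examples is simple enough to sketch in a few lines of JavaScript, using the 5-hour base and the variance table above (both of which are, of course, only my estimates):

// A sketch of the estimate calculation: a 5-hour 'average topic'
// adjusted by the variance table above. Each topic is scored 1-5
// against the four criteria.
var BASE_HOURS = 5;
var VARIANCE = { 1: 2, 2: 1, 3: 0, 4: -0.5, 5: -1 }; // hours per criterion

function estimateHours(scores) {
    var total = BASE_HOURS;
    for (var i = 0; i < scores.length; i++) {
        total += VARIANCE[scores[i]];
    }
    return total;
}

estimateHours([1, 1, 1, 1]); // 13 hours: the hard topic above
estimateHours([5, 5, 5, 5]); // 1 hour: the easy topic above
estimateHours([3, 3, 3, 3]); // 5 hours: an 'average' topic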

Now, the obvious thing to do would be to create a spreadsheet for all of this, one that allows you to simply add in your topics, score them against the criteria and calculate the total estimated time (and, whilst it’s at it, add in a level of contingency too). Which is exactly what I did.

Download the estimation spreadsheet (6KB ZIP file, contains Excel Spreadsheet)

The spreadsheet is annotated to help you understand it, and includes two additional columns which let us track when a topic was added to the spreadsheet (either as part of the initial planning, identified during the review cycle, or because of a change in product scope). All of the calculations are basic arithmetic so feel free to have a poke around and try this out.

It’s not an exact system, but that’s why they are called estimates and, if nothing else, it helps my team plan what they are writing about which, sometimes, is more valuable than the estimates themselves.

* this is probably the most contentious part of this method and may take some refinement to arrive at a workable number.