Guesstimation

One of my biggest bug-bears in development is having to come up with arbitrary methods of scoring.

For instance, I’ve got to work on an assessment system where there are three skill levels based on roles, and three skill levels for the relevant tasks. For example, a question such as “I can touch-type” would have three levels: “Can’t at all”, “Can, but not properly” and “Fully qualified touch-typist” – but touch-typing itself would be a skill level of 2 (don’t ask me, I don’t know where the levels came from). Once the tests have been done, I’ve got to come up with some completely arbitrary method of taking the scores from those role-based skill levels, and the skill level of the user in each role, and saying “User [x] is at skill level [y] and thus needs/doesn’t need extra training”.

But the actual formula for working out the points is being left to me – no-one else can get their head round it, so it’s been left to the techie. And I think I’ve got something that works, and weights things properly in favour of the higher skills, but I don’t really know. Even worse, I suspect we won’t really know whether I’ve got it right until the entire system has been fully tested and used – by which time it will (in theory at least) be far too late to do anything about it.
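For what it’s worth, one way to weight things in favour of the higher skills is to square the required level when scoring, so a gap in a level-3 skill costs far more than a gap in a level-1 skill. The sketch below is purely illustrative – the function name, the squared weighting and the 0.75 threshold are all my assumptions, not the actual formula the post describes:

```python
# Hypothetical sketch of a weighted skills score. Required levels and
# assessed levels are both on the 1-3 scale described above. Squaring
# the required level weights the result in favour of higher skills.
# The names, weighting and threshold are all assumptions for illustration.

def needs_training(required, assessed, threshold=0.75):
    """required/assessed: dicts mapping skill name -> level (1-3).

    Returns True if the user's weighted score falls below threshold.
    """
    # Maximum possible weighted score: each skill contributes level^2.
    total_weight = sum(level ** 2 for level in required.values())
    # Credit earned: capped at the required level, weighted by that level.
    earned = sum(
        min(assessed.get(skill, 0), level) * level
        for skill, level in required.items()
    )
    score = earned / total_weight if total_weight else 1.0
    return score < threshold

# Example: touch-typing is a level-2 skill; the user is at level 1.
required = {"touch-typing": 2, "filing": 1}
assessed = {"touch-typing": 1, "filing": 3}
print(needs_training(required, assessed))  # score 3/5 = 0.6, so True
```

The squared weighting means a user who has only mastered the low-level skills can’t paper over a gap in the high-level ones, which seems to be the intent of “weighting in favour of the higher skills”.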

It’s enough to drive me crackers.


One Comment on “Guesstimation”

  1. Alan Robinson says:

    I have been involved in training, testing and evaluation for 15 years; assessments must be made objectively.

    In your own field, you are a subject matter expert, so you could interview someone and establish their ability in that field.

    To do so objectively, you would need to compile a set of required objectives, both theory and practical, then decide what level of mastery is acceptable.

    A written test would follow the interview, then a performance test; this provides a permanent record of the evaluation.

    How are you expected to assess (objectively) an instrument technician, fabricator, supervisor etc? It’s not possible for one person to do this, objective evaluation requires an SME for each job/skill.

    You need to poll all the relevant department heads to establish the job types they have working for them, and also establish what level of expertise they are seeking (system(s), materials, equipment and tools used).

    Once you have the feedback about the various job types, get back to the various heads requesting an SME for each.

    The SMEs do the work of writing the objectives, compiling test questions and practical tests.

    When you have the feedback from the SMEs, you can start to develop a suitable data bank. When done, fire this back up to the heads and SMEs for approval, corrections and/or additions. You now have the relevant information to compile a database for objective written and practical tests.

    When working overseas, there were times when I would have to evaluate someone with 20 to 30 years of field experience, in order to establish where they fit into the company matrix. This would involve 2 to 5 days one to one. The majority were top notch, but were weak in theory; all of them dreaded this assessment.

    The majority received promotions, some to supervisory, some I sent back to apprentice level (still on the same wage). This was covered by the objectives, signed off by the managers and SMEs. An object lesson in CYA (cover your arse).

    I started off using DB V, in the latter years I was using LXR Test 6.02, a clunky interface but a powerful programme.

