TASK INVENTORY DEVELOPMENT
TEXAS COMMISSION ON LAW ENFORCEMENT
OFFICER STANDARDS AND EDUCATION
Task Inventory Development Manual
Special Acknowledgment should be given to:
Dr. Craig Campbell
Texas Commission on Law Enforcement Officer Standards and Education
Dr. Brian Graham-Moore
University of Texas
Ms. Eileen Lynch
Texas Commission on Law Enforcement Officer Standards and Education
Dr. Ivan Messer
Texas Commission on Law Enforcement Officer Standards and Education
Mr. Jeff Olbrich
Texas Commission on Law Enforcement Officer Standards and Education
Ms. Jayne Tune
Texas Commission on Law Enforcement Officer Standards and Education
TABLE OF CONTENTS
A. Definition of the Occupational Survey Instrument
C. Conducting Technical Conferences
D. Overview of the Sequential Steps of Inventory Development
E. Structuring Source Information into Duty-Task Lists
F. Assigning Tasks to Duties
II. Consulting with Functional Representatives
III. Developing and Updating the Task Inventory
A. Properties of Tasks
B. What is a Task Statement?
C. Purpose of the Task Statement
D. Task Writing Rules
F. Extracting Task Statements
H. Editing and Screening Task Statements
I. Building the System, Equipment, and Tools List
J. Initial Field Visits
K. Final Equipment List
L. Writing the Project Summary
M. Survey Expectations
IV. Summary and Conclusions
A. Definition of the Occupational Survey Instrument
Job analysis is concerned with the collection, processing, and interpretation of job content data, e.g., tasks, duties, and supporting knowledge. The fundamental building block of an occupational survey is the instrument which elicits these data from the respondent. Developing this instrument for the Texas Commission on Law Enforcement Officer Standards and Education (Commission) is a full-time concern. Since this instrument is a comprehensive listing of the significant work performed at the task level, it is called a task inventory. In part, this is a misnomer, because the collection of job-person data with this "inventory" is generally divided into three categories. These are:
Task statements are carefully worded phrases that represent actual job behaviors. The task statements are carefully worded because they are used to help the job incumbent recall whether he or she performs the task. The closer this wording is to the behavior the easier and more accurate the recall (and, the better the measurement properties). This manual will explain how to construct and validate task statements to meet the goals of the Commission in the area of training and certification.
The background questions of the Commission inventory collect data which aid in identifying the job incumbent quantitatively and qualitatively. This information may include:
Qualitative background questions depend heavily on the needs of the end user. For example, academies may wish to know the quality of their instruction as perceived by the trainee. Researchers may wish to assess the relationship between job satisfaction and utilization of talent with current job assignments. These measurements of qualitative variables can aid the Commission in its acquisition of knowledge for training and licensing.
This manual will explain how to select areas for background question development and suggest which formats that background questions should have. There is little question that this area of inventory development has grown in complexity and emphasis each year because of its usefulness. By being observant and sensitive to the needs of the end users, the inventory developer shapes the content of the background section to capture needed information.
The equipment and materials list that accompanies an inventory is a part of the background questions section. However, it should be conceptualized separately because it may greatly define the job content. Of the dimensions that underlie the performance of job-tasks, use of equipment is a clear delineator of what people do. It may also show the relative level of skill. The equipment list for the police career field may define much of its technology when analysis is completed. By carefully constructing this list, the number of task statements may be reduced and many other objectives of job analysis may be achieved. The distribution of who uses what equipment assists the task analysis phase. The occupational analyst can compare this information on equipment and materials with reported task performance.
While not limited to the following, some of the objectives of the occupational survey could be to:
1. improve the accuracy, completeness, and currency of State-mandated preparatory training and continuing education,
2. maintain the currency and quality of Training Standards, which influence all specialty training,
3. provide information to support personnel research on aptitude requirements, job satisfaction, and strength and stamina, and
4. provide information to trainers concerning the percent performing, difficulty, and training emphasis of tasks to help structure and refine training.
None of these objectives could be met if the Commission inventory falls short of the principles discussed in this manual. The reason is simply this: the greatest strength of career field surveys lies in the fact that the individual closest to the work performed is providing the descriptive information about the job. The greatest weakness is the inherent individual variation of people and how they might characterize their work. For example, when asked to write narrative job descriptions, 10 incumbents of similar jobs may write 10 dissimilar narrative descriptions. However, when the same 10 incumbents fill out an inventory, the task level information will indicate that they perform similar jobs. Therefore, the greatest weakness shifts away from individual variations in job descriptions to the structure of the inventory. This structure aids the incumbent in recalling the tasks of his or her job. The stimulus of the task statements must be clear, unambiguous, specific, and psychologically real. In fact, as a starting point of job measurement, the task statement refers to a segment of work performed. However, these task segments must be comprehensive and mutually exclusive. Yet, even in an exhaustive task list, insignificant tasks will be deleted because tasks should meet certain standards.
U.S. Air Force research has defined these properties of the task inventory:
1. Accuracy of measurement. The smaller the unit of description (i.e., task), the more stable the description tends to be in job analysis. This level of measurement permits high reliability.
2. Comprehensiveness. The task inventory attempts to capture all significant tasks in a career ladder, yet each job incumbent may add any tasks not captured by the inventory developer simply by writing them in.
3. Quantifiability. Data collected can be scaled on an unambiguous, reliable, and valid dimension, such as task frequency.
4. Mutual exclusivity. Each task is complete and stands alone. Each task describes a unique behavior of work. This property makes meaningful clustering possible.
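The quantifiability property can be illustrated with a minimal sketch. The Python fragment below computes the percent of respondents reporting each task, one of the simplest scales applied to inventory data; the respondents and task names are hypothetical, invented only for illustration.

```python
# Hypothetical inventory responses: respondent -> set of tasks
# reported as performed. Names and tasks are invented examples.
responses = {
    "officer_1": {"Maintain patrol logs", "Interpret radarscope photos"},
    "officer_2": {"Maintain patrol logs"},
    "officer_3": {"Maintain patrol logs", "Type standardized formats"},
}

# The unique tasks mentioned across all respondents.
tasks = sorted({t for performed in responses.values() for t in performed})

def percent_performing(task: str) -> float:
    """Percent of respondents who report performing the task."""
    n = sum(1 for performed in responses.values() if task in performed)
    return 100.0 * n / len(responses)

for task in tasks:
    print(f"{task}: {percent_performing(task):.0f}%")
```

Because every task is a discrete, mutually exclusive statement, this percent-performing figure is unambiguous; the same could not be computed from narrative job descriptions.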
It is estimated that at least 750 occupational surveys have been conducted, involving over one million Air Force personnel. This means that 750 occupational survey instruments were developed from scratch or updated to collect data so that the objectives of job analysis could be achieved. The methodology of the occupational survey is, to a very great extent, a development of Air Force research. This manual builds on the highly useful procedural guide of Morsh and Archer (1967), but expands on this base of information to reflect the latest development and refinement of inventory construction since its publication. In addition, this manual reports on methods developed outside the Air Force and adapted for the Commission.
While the task checklist has been in existence since before World War II (Lytle, 1946, 133), its development by the Air Force began in 1954. At that time, Air Force personnel researchers opted for a quantitative, survey-oriented job analysis system. It soon became clear that the task inventory was the most reliable and valid method of collecting job data for their purposes. The actual format, scaling, and supporting documentation on how to systematically carry out this methodology has been advanced by Air Force research up to the present.
During the early years of Air Force research, the assessment of which work activity statements elicited reliable responses was made (McCormick and Tombrink, 1960). Comparisons to other job analysis systems were made, and the task inventory held the most promise (Morsh, Madden, and Christal, 1961). Field experience was collected and reported in order to provide documentation on how to consistently offer a quality task inventory (Morsh and Archer, 1967; Archer and Fruchter, 1973). These reports, along with those of Cragun and McCormick (1967) and Morsh, Madden, and Christal (1974), emphasize research findings and collected field experience which indicate how properly constructed task inventories are reliable, valid data collection instruments.
The emphasis above on "properly constructed" is due to the fundamental measurement properties of the task. Early Air Force decisions to achieve a quantifiable, survey-oriented job analysis system directed researchers to the measurement properties of job analysis components. Exhibit I illustrates these levels rather well. The trade-offs for levels of precise measurement are also indicated in Exhibit I. For example, while element level (sub-tasks) measurements are more precise than tasks, data collection is difficult when compared to the task level. This is because workers often refer to their work in tasks and not steps or components of tasks.
Exhibit I. Job Analysis Levels of Measurement by Descriptive Power, Ease of Data Collection, and Cost [table not reproduced]
Also, too much or too little information affects the job incumbent's ability to accurately recall the content of work. Task level referents work best (McCormick and Ammerman, 1960). Since the cost of any system is an important factor, elements and therbligs* require a staff with specialized training, such as industrial engineering, in order to observe and make these measurements. Since such a method would no longer be survey oriented, the cost goes up considerably while generalizability of results suffers because fewer positions will be analyzed. Conversely, Exhibit I shows that position and level descriptions would be inexpensive and easy to collect, yet the descriptive content would be poor, as would the quantifiability. The net result of Air Force research during 1954 to 1961, as reported by Morsh, Madden, and Christal (1974), is very much in line with the trade-offs implied in Exhibit I.
Namely, the task inventory is:
1. simple and clear, as a stimulus,
2. a quantifiable, standardized form for capturing job information,
3. a survey method backed by computer assistance,
5. flexible while producing a broad sampling of the Air Force career fields, and
6. a system serving many users because of the rich descriptive power of task level measurements.
* Therblig, as a name, was created by Frank Gilbreth to designate a small movement of a task.
Air Force research also focused on the properties associated with good inventory development. For example, early studies reported in Archer and Fruchter (1963) and in Morsh and Archer (1967) illustrated the usefulness of:
1. beginning each statement with a present tense action verb,
2. deleting extraneous words which add little meaning, especially redundant qualifying phrases,
3. ensuring parallelism of tasks across duties in the inventory, and
4. making tasks time ratable.
Another part of the Air Force literature consists of reports on experience with different phases of inventory development. Field experiences on how to conduct a task inventory review technical conference are collected in Morsh and Archer (1967).
C. Conducting Technical Conferences
Before reviewing tasks with experts it is usually best to begin with the duty outline. The Subject Matter Experts (SMEs) are shown a list of the duty headings. These are reviewed one by one for clarity and accuracy. They are asked if any duties performed in the field are omitted from the list. If so, the missing duties are added.
Each SME is then given a copy of the job inventory, and the individual tasks are reviewed. The supervisory duties, such as Organizing and Planning, Inspecting, Evaluating, and Training, are postponed until last and often not reviewed. Beginning with the first non-supervisory duty in the inventory, the interviewer reads each task statement aloud and asks leading questions about particular items, as in the following examples.
1. Will everyone know what this means, or is it a local term?
2. Is this task covered by another task?
3. This one sounds pretty vague to me. Could we make it more specific?
4. Would this fit better under Duty E?
5. This task says "Maintain logs." What kinds of logs does it cover? Maybe we should list this separately?
6. Are there any other tasks under this duty that are not listed?
Overall, perhaps ten years of military research effort was concentrated on different facets of the occupational survey method. Much of this research focused on methods for constructing and validating the task inventory. Of course, many other facets of the occupational survey method are highly interrelated with inventory construction - especially in the area of scaling and data analysis. While modifications and improvements to the overall methodology are being made continuously, the methods that go into the construction of the task inventory are a known quantity. That is, since the mid-1960s, the methodology hasn't changed appreciably. This fact, however, hides the difficulty of the skill required to develop an inventory. A number of these skills have rarely been dealt with in the literature. For example, the difficulty of inventory development may lie not so much in mastering the methodology as in having personnel with the personality traits and interpersonal skills associated with successful inventory development (Graham-Moore, 1982). Interviews with supervisors of inventory development across the Navy, Army, Marines, and Coast Guard reveal these traits or characteristics that seem to be associated with successful inventory developers:
2. high verbal aptitude combined with a desire to write
3. appreciation for detail
4. a good memory and "ear" for the semantic properties of language
5. an openness in interpersonal relations combined with assertiveness
While there is agreement by supervisors of inventory development on the above personal characteristics, there is no agreement on whether this function should be specialized. That is, in the experience of some supervisors, there are developers who can manifest all of the personal characteristics listed above and can move into occupational analysis, which requires a different set of skills. There are also individuals who excel at inventory development and who prefer to specialize in this function.
The variety and range of task inventory development is contingent on the purposes for which it is being used. Usually, however, this variation affects the way tasks are rated and not the basic construction of the inventory. As reported by Morsh (1964), the primary purpose for which the inventory is developed is to capture work performance at all skill levels so that a complete description of the career ladder can be defined. Another key purpose is to distinguish between task performance at different levels - if such differences exist. Once task frequency is understood, then a variety of analyses can be performed. These analyses permit the development of the following products:
With the advent of computer-assisted library searches, it should be easier to locate contemporary reports on inventory development. Future literature reviews of this kind will become obsolete, as a comprehensive literature search can be achieved by inputting appropriate keywords such as "task inventory development," "task analysis," and "CODAP."
D. Overview of the Sequential Steps of Inventory Development
After an inventory development project is assigned to an Air Force developer, a complete review of the Current Task Inventory Bank Case File (CTIB) is undertaken. The location and procurement of source materials has been centralized by this function. When the CTIB file isn't sufficient, documentation will be searched more broadly. If it exists, it will be made a part of the CTIB Case File for future efforts. If it doesn't, then the inventory will be built from scratch.
Functional representatives are consulted by the developer to determine the purpose of the occupational survey. For example, functional reps may wish to update training or evaluate the occupational structure of one or more career fields. Sometimes other special requests will be part of a survey request. The developer should initiate and maintain communication with all functional reps, e.g., Training Coordinators at police academies, other qualified police experts.
After guidance from functional reps, the developer will coordinate technical school visits - since most surveys are related to school training. Requests for subject matter experts are made at this time. This group interview and subsequent working relationships will be invaluable to the developer. Very likely, instructors at training academies will be an excellent group of experts to work with. They are technically competent and communicate well.
The police academy contact will lead to a greatly revised task list. Whether the developer started from scratch or updated an old inventory, this revised task list will make technical conferences more productive and ensure that training and certification issues are included in the survey instrument.
After police academy contact, a mailed field review is useful. In effect, it is a pilot test wherein a small sample of the career field receives a copy of the inventory by mail. Their responses are in the form of annotations to the questionnaire. These are analyzed by the inventory developer for all the principles of good task statements. Write-ins are evaluated as well.
E. Structuring Source Information into Duty-Task Lists
Structuring source information into duty-task lists is the outgrowth of the CTIB review process. This activity is described briefly in Morsh, Madden, and Christal (1961), Archer and Fruchter (1963), and Mayo (1969). Examples of inventory usage by others (see appendix) also include information on this process. These sources are synthesized here and combined with the current experience of the inventory development function of the Commission.
A large function of work (such as "Patrol") may have many tasks which support it. Recognizing that this large function of work is a duty, the developer further attempts to subdivide a career field into all the duties or functions performed. A variety of ways can be pursued to identify these duties.
1. Identify broad headings in CTIB Case File documentation.
2. Identify organizational charts of work section depicting subdivisions.
3. Separate supervisory from non-supervisory functions.
4. Isolate functions which perform on one class of equipment, e.g., dog handling.
5. Isolate functions in terms of responsibilities of performing across different classes of equipment.
6. Create duty headings which portray the different jobs and skills of a career field.
7. Create duty headings which portray the similar jobs and skills of a career field.
8. Create a general or "catch-all" duty for tasks performed that apply to more than one specific duty and for tasks not relating to any specific duty.
Organizational charts, flow diagrams, and other work section sub-divisions often provide a starting point for a duty outline. Previous task inventories are a source. Assemble these documents into categories that are mutually exclusive yet general in scope. The categories should cover the entire career field. Sub-division of duties can always be done later.
It might help to separate supervisory from non-supervisory functions. This division will permit the developer to focus on the non-supervisory duty-tasks more clearly and efficiently. For example, the Appendix contains many empirically derived supervisory tasks. Thus, the developer can use this Appendix as a guide and modify it for the particular career field. There may be unique supervisory tasks, but this Appendix will greatly expedite this portion of inventory construction.
The following rules will aid in this phase.
1. Add tasks that discriminate between supervisors in the career field.
2. Include only supervisory type tasks in the supervisory section.
3. Use the verb "supervise" when referring to personnel.
4. Use the verb "direct" when referring to functions.
5. Place all training tasks in a training duty.
6. Include only administrative tasks in an administrative duty, if possible.
Following these rules will organize the supervisory duty section into a cohesive and consistent unit, survey to survey. Also, no time will be lost processing supervisory duties while intending to process non-supervisory duties.
Isolating functions in terms of responsibilities of performance on one class of equipment can be useful in establishing a duty outline. For example, duty headings in this instance could begin with typical patrol functions. If functions are the common denominator across types of assignments, personnel, or other subject matter, then the inventory developer must divide the work into these functions. By isolating functions, the duty headings should help in organizing documentation into appropriate categories. These categories refer to the broad areas of police work.
By all accounts, the best approach is to set up a duty outline. Using some or all of the steps mentioned above, the developer will have a duty outline reflecting mutually exclusive, but broadly stated areas of coverage. Each of these areas can then be progressively re-worked. Hopefully, each duty heading can be broken down to the same level of specificity. While it is not always possible to end up with the same level of task specificity, by working from duty-heading to task statement, task specificity is more under the control of the developer. Very likely, the developer will see a close connection between the duty heading and the action verbs used in the supporting task statement.
Reconciling the different duty-task breakdowns possible from documentary sources may require advice from subject matter experts. However, it is best to construct a first draft of the inventory before this step. Thus, many problems concerning task inventory construction can be dealt with at the same time. The goal is to achieve a duty outline that completely describes the functions of a career field.
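The duty outline described above can be sketched as a simple mapping from duty headings to their supporting tasks. The Python fragment below is an illustrative sketch only; the duty headings and task wording are invented, not drawn from a Commission inventory.

```python
# A hypothetical duty-task outline: mutually exclusive duty headings,
# each supported by its own task statements.
duty_outline = {
    "A. Performing Patrol Functions": [
        "Patrol assigned areas",
        "Maintain patrol logs",
    ],
    "B. Supervising Personnel": [      # supervisory tasks kept separate
        "Supervise shift personnel",
        "Direct patrol functions",
    ],
    "Z. Performing General Duties": [  # the "catch-all" duty
        "Attend briefings",
    ],
}

# A quick check of mutual exclusivity: no task statement should
# appear under more than one duty heading.
all_tasks = [t for tasks in duty_outline.values() for t in tasks]
assert len(all_tasks) == len(set(all_tasks)), "duplicate task across duties"
```

Keeping the outline in one structure like this makes it easy to verify coverage and exclusivity before the draft goes to subject matter experts.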
F. Assigning Tasks to Duties
One of the key problems faced when assembling a draft inventory from documentary sources is the variety of ways a duty or task could be conceptualized. For example, a task can be described in terms of the tools used, the equipment worked on, the procedures used, or the end product. To combine all these descriptors into a task statement makes for an overly long task statement. Further, if all these descriptors are broken out into separate tasks, then mutual exclusiveness is not maintained. Also, analytical advantages are lost. Obviously, the developer must select that description of the task which provides the most useful information. No inventory should reflect tasks which only describe work in terms of tools, for example, because the inventory will fail to detect differences in jobs by those incumbents who use the same tools on different equipment. It is their action with the tool that is important. No simple rule of thumb can prevent this type of biasing produced by poor assignment of tasks to duties. Careful review of the "best" task descriptions with subject matter experts will guard against assigning poor tasks to poor duties. However, the written task statement must communicate in the language of the incumbent.
II. Consulting with Functional Representatives
At the inception of inventory development, the developer will review this file and initiate contacts with all pertinent functional representatives. Depending upon the purpose of the survey, the developer will plan his or her approach. For example, an inventory update in a career field with no great changes recorded in the CTIB would suggest that close coordination with the academies, combined with a small sample of experts from the field, will produce a good inventory. These contacts will provide the developer with information affecting the coverage of the inventory. For example, technological changes, which may be a function of new systems, equipment, or methods occurring in the field, must be captured and verified in the field. Thus, close contact with functional representatives provides the developer with a sensitivity to this information and knowledge of the kind of task coverage required. In this instance, more field visits with experts may be justified. Questions such as the following are useful in these consultations:
1. What are the principal functions of the career field?
2. Where are these functions carried out?
3. Are there current sources of documentation describing tasks, duties, training, maintenance? (solicit copies)
4. What are some of the changes you (have seen) (expect to see) in the career field?
5. What other people would you designate as particularly knowledgeable in this career field?
6. What are some of the career concerns prevalent in this field? (probe: promotion, obsolescence.)
7. Is there informal or on-the-job training going on? (please explain)
III. Developing and Updating the Task Inventory
A. Properties of Tasks
A task is a discrete behavior that is observable. Since a task is a behavior it refers to a specific action. These specific actions are independent of other actions. If completed, a task can stand alone because it has a logical beginning and end. Because tasks are observable behaviors that have all of the above properties, it follows that they are performed in relatively short periods of time - perhaps seconds, minutes, or hours and, rarely, days.
"The tasks which constitute a job are not homogeneous units of behavior, but are, rather, logically differentiated segments of work activity," (Morsh, Madden, and Christal, 1961). This means that tasks can vary in their specificity and content. For example, a task that consists of the action verb "compare" may be quite limited in scope and content when compared to a task with the action verb of "conduct." With "compare" or "check," the scope refers to the narrowness of the action, i.e. these actions are nominal behaviors. These actions refer to a human discrimination concerning the presence or absence of something. The content of these actions refers to what is being compared or checked - such as items on a list compared to the actual item. However, "conduct" or "coordinate" are tasks referring to actions which have greater scope. The scope of these actions is broader and the content is more complex, diffuse, or time consuming. Thus, one can see that "logically differentiated segments" or tasks of a job can vary in their specificity and content, yet still have all of the properties of tasks. Interestingly, tasks have remarkable stability over time because of their discrete nature. However, combinations of tasks which support and define duties (and jobs) change over time.
The task exists at a level of specificity somewhere between a duty (function or responsibility) and an element (a procedural step or sub-task). Tasks are caused by interaction with the work environment and therefore represent a mix of procedures, methods, techniques stimulated by equipment, material, people, information, and data concerns.
Significance refers to tasks which are imbued with importance, responsibility, or difficulty, and/or which account for at least an average amount of time when compared to other tasks.
Since tasks are observable behaviors, they have the important property of being time ratable. This means that, having an observable beginning and end, most of us can compare the length of a task with the length of other tasks. In summary, tasks are defined by these properties. They are:
1. observable behavioral actions,
2. independent, with a clear beginning and end,
3. differentiable between levels of work,
4. stable over time,
5. time ratable,
6. significant, and
7. at a definable level of specificity.
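The screening implied by these properties can be sketched as a simple record per candidate task. In the fragment below, the field names, the pass/fail rule, and the example statements are illustrative assumptions, not Commission procedure.

```python
from dataclasses import dataclass

@dataclass
class CandidateTask:
    statement: str    # e.g., "Maintain patrol logs"
    observable: bool  # a discrete, observable behavioral action
    time_ratable: bool  # has a clear beginning and end
    significant: bool   # important, difficult, or time consuming

def passes_screening(task: CandidateTask) -> bool:
    """Keep only candidates that exhibit the defining task properties."""
    return task.observable and task.time_ratable and task.significant

candidates = [
    CandidateTask("Maintain patrol logs", True, True, True),
    # A worker trait, not a task: it fails every property.
    CandidateTask("Be professional", False, False, False),
]
kept = [t.statement for t in candidates if passes_screening(t)]
print(kept)
```

Note how the second candidate is rejected for exactly the reason given in the text: it describes a quality of the worker, not an observable segment of work.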
It is instructive to review what are not properties of tasks:
1. worker qualifications such as aptitude, intelligence, education, skill, and experience,
2. job responsibilities and work objectives that may be a part of a position description but are not tasks, and
3. global responsibilities of a work team, work center, or support group are not tasks.
All of the above are examples of things which help to describe factors associated with the work process, but they do not describe the actual work. They may be useful sources of information or justifications for tasks, but they are not tasks of the job. Jobs can exist whether or not people are available to staff them. Task analysis endeavors to describe the tasks performed and not the people who perform them. In the analyst's manual, we will see how attributes of people (such as education, training and skill) will aid in our understanding of the work performed, but the task inventory is a methodology designed to describe tasks performed. The distinction of what is and what is not a property of a task is crucial to the writing of task statements.
B. What is a Task Statement?
Writing a task statement that describes the task performed constitutes a basic contribution to good measurement. The task statement, in effect, codes work behaviors into discrete descriptors. The closer the task statement comes to describing the work behavior, the better the measurement. It should be clear that this task statement is designed to elicit a response. It permits the job incumbent to recall whether or not the task is a part of the job. If the statement is unclear or imprecise, then the eliciting framework fails as a stimulus. The assessment of the reliability of a task statement over time, the assessment of its validity, and the connection of the task statement with other variables (such as time spent) all become impossible, because poor task descriptions defeat good measurement.
When a task statement is properly structured, it will capture the properties of the task and provide good measurement. Thus, this statement is normally composed of a specific action verb and a brief identification of what is being acted upon. A third component of the task statement exists whenever qualifiers are needed to distinguish the task from other tasks. This may be caused by the need to define the scope of the task statement or to communicate the task unambiguously. Exhibit II illustrates these two or three components of the task statement.
Exhibit II. The Structure of the Task Statement
Action Verb + Object Being Acted On + Necessary Qualifier
Compute + time study factors + using continuous technique
Compute + time study factors + using snapback technique
It is obvious from the example above that clarity and brevity are essential to the components of the task structure, because the job incumbent will be reading many of these statements. The pronoun "I" is always left off this eliciting framework, because it is understood. Thus, the incumbent reads "compute time study factors," but probably thinks the "I."
Incorrect structure will lead to ambiguity. Thus, the preferred format is present tense action verb and the direct object of that verb. A necessary qualifier may be added, if justified. Therefore, the structure of the task statement is always two, but never more than three components.
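A rough structural check along these lines can be sketched in a few lines of Python. This is illustrative only: the action-verb list is a small hypothetical sample, not the Commission's official verb list, and a real review would rely on subject matter experts rather than a script.

```python
# A tiny sample of present-tense action verbs (hypothetical list).
ACTION_VERBS = {"compute", "maintain", "interpret", "inspect", "type", "patrol"}

def check_statement(statement: str) -> list[str]:
    """Return a list of structural problems found in a task statement."""
    problems = []
    words = statement.split()
    if not words:
        return ["statement is empty"]
    if words[0].lower() == "i":
        problems.append('drop the pronoun "I"; it is understood')
    elif words[0].lower() not in ACTION_VERBS:
        problems.append("should begin with a present-tense action verb")
    if len(words) < 2:
        problems.append("missing an object being acted on")
    return problems

print(check_statement("Compute time study factors using continuous technique"))
print(check_statement("I maintain logs"))
```

A well-formed statement such as "Compute time study factors" produces no problems; "I maintain logs" is flagged for the unnecessary pronoun.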
C. Purpose of the Task Statement
The task statement must describe the significant tasks of a career field at the correct level of specificity. If the statement describes the work content properly, then task commonality permits similar workers to cluster easily. Meaningful differentiation between skill levels, job types, and areas where separate training is necessary must also characterize the task inventory. It is crucial that both of these possibilities are served. It should be clear that only job analysis that culminates in appropriate task statements will achieve these two possibilities. For example:
1. "Interpret visual photographs" and "Interpret radarscope photos" would differentiate between visual and radar job types in the Photo Interpretation career field. "Interpret photographs," however, would not differentiate, since members of both job types could say they perform the task (Morsh and Archer, 1967).
2. "Type standardized formats" may allow officers who type on a variety of different forms to answer one task statement. If task specificity were increased, the direct object could change to "DWI report" or "accident report" instead of "standardized formats." Some trade-off of specificity for commonality has to be achieved by the developer if the purpose of the task inventory is to be served.
The key to determining what the task statement should reflect is found in the purpose of the survey. For example, if career fields are to be carefully described and their career ladders structured, then the best level of specificity is one which differentiates. If training purposes are to be served, then differentiation is desired. However, the overuse of modified direct objects may artificially separate workers into groups that do not explain the commonality between them. The next section will offer a set of task writing rules which will permit the achievement of description and differentiation.
D. Task Writing Rules
These task writing rules are principles to guide the inventory developer. Not all rules are mutually exclusive. Some may appear to be contradictory since the rules satisfy various conditions. A discussion follows the list. Judgment on the part of the developer is a key factor in deciding which of these rules is operative and which rules to break. Document these judgments if and when they occur.
1. Task statements should be brief.
2. Each task statement must be specific and capable of standing alone.
3. Task statements should not overlap or be redundant.
4. Avoid vague or ambiguous verbs.
5. Maintain the same level of task specificity, if possible.
6. Task statements should differentiate between workers or different job types within a career ladder.
7. Task statements must be clear and easily understood by the worker.
8. Task statements should differentiate between levels of supervision (supervisors and journeymen, journeymen and apprentices).
9. Task terminology should be consistent with current usage in the career field.
10. Task statements should have the same meaning for all workers in the career field.
11. Task statements must lend themselves to frequency ratings.
12. Abbreviations must be spelled out when first used in both the background section and the task list. In addition, where different respondents may perform different duties, abbreviations should be spelled out more often.
13. Qualifiers should be avoided. If used, there should be a parallel task and explanation in project summary. (See #25, below)
14. Multiple verbs should be avoided in task statements. (If used, explain in project summary.)
15. Worker qualifications, such as intelligence, aptitude, education, training and experience are not tasks.
16. Receiving instruction is not a duty or a task unless actual work is performed during training.
17. Each task statement must be a complete statement.
18. If a direct object modifier is needed for specificity, include all other significant tasks with comparable direct object modifiers.
19. Punctuation: Avoid slashes (except when a part of a form title or brand name). Always use commas before conjunctions in a series. (Example: weld high, low, or medium carbon steel.) Above all, be consistent.
20. Conjunctions: Use "and" only when actions are invariably performed together (In most cases, use of one verb could suffice for both actions.) Do not use "and/or". Use "or" only when actions are interchangeable in terms of skill.
21. Verb consistency: Differences between verbs used in the inventory that appear to have similar meanings should be explained in project summary.
22. Multiple objects: Avoid where possible. When used, be sure they are interchangeable in terms of skills.
23. Spelling: Developer is responsible for correct spelling throughout inventory. Be consistent with career ladder usage.
24. Pluralization: All task statements are to be pluralized.
25. Parallel tasks should appear in appropriate duties. For example, for a task listed as "Inspect equipment" there is likely to be some kind of maintenance task performed, i.e. the parallel action.
26. Tasks may differentiate between career ladders if more than one ladder is covered in the inventory.
27. The task statement should run no more than two lines and be brief (action verb, object, and, perhaps, a qualifier).
28. Similar tasks requiring significantly different training require separate tasks.
Task statement brevity and clarity are achievable goals if the developer adheres to as many of the task writing rules as possible. For example, if an action verb and direct object describe a particular task, then brevity is accomplished. There are only two components of a task statement to read. When these two components utilize the words which most closely name the task performed, the job incumbent is able to recall more clearly. Also, by capturing the task level of the job content, larger activities or functions will not be described. For example, "remove fuel pumps" is usually preferable to "remove fuel systems." Removing fuel systems could be a function, duty, or a more global task (one not stated at the correct level of specificity).
Since each task statement must be specific and capable of standing alone, there should be no overlapping or redundant tasks. For example, independence of a task statement can often be judged by the scope of the direct object. In the case of "fuel systems" versus "fuel pumps," the developer must decide which of these levels is independent, stands alone, and does not overlap actual tasks. "Decomposing" or breaking down tasks into their component steps will generally reveal if underlying tasks are being disguised. In the example above, "fuel systems" disguises independent tasks - tasks which qualify as independent, mutually exclusive, and inherently different. This inherent difference may lead to task performance on different types of fuel pumps requiring different skills, knowledge, and ability.
Avoidance of vague or ambiguous verbs requires constant vigilance on the part of the developer. Certain verbs should never be used because they are passive, e.g., "have responsibility for," "understand," or "participate in." The developer should also keep in mind that a verb's vagueness is sometimes a function of the level of specificity. Some verbs will be vague or ambiguous in one context but not in another. The test for this is whether they disguise the behavior to be described. "Maintain," for example, can refer to an outcome - the result of numerous tasks culminating in maintaining. "Maintain" could also refer to responsibility for maintaining but no direct behaviors on the part of the responsible person. At one level of specificity "maintain" is inappropriate in describing the job content. At another level, "maintain" might qualify as an action verb since the scope of the action is very broad. The developer should have an understanding of the level of task specificity needed for the study. As mentioned in the "Task Writing Rules," descriptive adequacy will govern the level of specificity. That is, the action verb closest to the behavior of the true task and the usage of the worker is the one that describes best.
It is a requirement that task statements be ratable. Since many new uses are being found for task analysis, the task statement must be worded so that any rating scale used in occupational analysis makes good sense. A practical step in this direction is to write the task statement and place the scale next to it. Ask naive raters to read and rate the task and its scale. If there is any doubt as to its clarity and logical connection, then the task statement will not elicit a useful response. Its scale will have poor measurement properties as well. Most task statements that are ratable by frequency have been found to satisfy the requirements of secondary rating efforts by subject matter experts.
A number of the "Task Writing Rules" are very specific and need no clarification, e.g. style and format rules. Some rules could be waived for good reason. This doesn't lessen the importance of rules to the developer. A clean, neat, well laid out task inventory is, in and of itself, a precise data collection instrument. All errors, flaws, and inconsistencies detract from the purpose of the questionnaire; namely, to unambiguously stimulate total recall. Therefore, rules on punctuation, capitalization, and form titles are just as important as those rules which require more independent judgment.
Many rules are based on procedures which have been tested empirically by U. S. Air Force researchers. For example, task statements should be organized by duty and arranged alphabetically within duty. Unpublished studies have shown that arranging tasks by duty shortens administration time of the questionnaire, but respondents have skipped duties where they believe they spend no time. This is contrary to their instructions; respondents are specifically requested to read all tasks. Skipping is especially likely where a duty is structured around a specific system or piece of equipment, e.g. "Clean Assault Rifle." If skipping duties may lead to incomplete task response, then the developer can group duties at a higher functional level, e.g. "Cleaning Weapons." Thus, the gain derived by organizing the inventory by duty is ease of administration and recall of work performed. The loss might be in the form of inaccurate, selective response by the job incumbent. The control for the above situation is the developer exercising judgment on duty organization and titling.
Since shorter inventories are preferable in terms of administration, organizing them by equipment or system-specific duty is preferred. In addition, task statements are arranged alphabetically within duty for several good reasons. One is that "the incumbent can scan very quickly through a list of tasks beginning with the word 'Inspect' to make sure that all inspections he performs are in the inventory" (Morsh and Archer, 1967). Another reason is that any chronological sequence of tasks is probably interrupted by the alphabetical arrangement. This aids the incumbent in establishing rating values for a given task while improving the reliability of the results. Task inventories can and have been arranged in random order and in chronological order. Random order appears to be very scientific, but incumbent recall is more difficult. Very long inventories would probably suffer a loss in reliability due to randomization. Chronological sequencing of time-ordered tasks has been shown to be too difficult to achieve, except for highly programmed occupations. Usually, there will be tasks in one sequence which will also exist in other sequences. Therefore, the gains made by presenting tasks chronologically may be offset by the difficulty in rating tasks reliably. With frequency ratings, the incumbent may only be rating a task in a given sequence and not across the entire job.
Qualifiers should be used judiciously. A qualifier makes the task statement longer. Thus, the qualifier should be essential to the meaning of the task statement. In other words, if the qualifier differentiates between two tasks, it serves its purpose. However, all qualifiers raise the issue of parallel tasks. This means that to include the qualifier indicates that there are multiple ways of doing the task. For example:
1. Write position description for clerical jobs.
2. Write position description for wage grade jobs.
3. Write position description for professional/administrative jobs.
Sometimes, multiple ways of doing a task are so variable that a more general task statement is preferred. As Mayo (1969) points out, "Coordinate with supply department to resolve problems" can be more economical in task statement construction even though "coordinate" is not very specific. Even if a series of specific task statements for each problem and each method were utilized, we still might not capture these idiosyncratic categories of task performance. Also, we might overweight the time spent measure.
Multiple purposes for a task may require qualifiers. In effect, why a task is performed may be a reason to differentiate it. For example, inspecting a power plant might be stated in various ways depending upon the purpose:
1. Inspect power plant for overheating.
2. Inspect power plant for oil level.
3. Inspect power plant for physical damage.
4. Inspect power plant for anti-rust preventative.
As Ammerman (1977) points out, each of these tasks may be performed with different frequencies. The purpose, in this case, could be of concern for formal school training. Here again, however, multiple purpose tasks can lead to problems. As Fruchter, Morin, and Archer (1963) noted, there could be such a variety of purposes that judgment must be used before splitting a task into many task statements. The test is to add a qualifier when purpose truly reflects a different activity. This means that other parallel tasks will be written as well.
Multiple situations and conditions of task performance may cause the developer to qualify the task statement with when and where it is performed. This could be especially true for environmental differences, stress factors such as civil disturbance zones, and emergency operating conditions. The same test applies, however, in that multiple situations and conditions must reflect truly different tasks.
Range of task performance may cause a qualifier to be written. When limits are placed on a task, then the task statement may need to reflect this constraint. For example:
1. Lecture large groups of adult trainees (the parallel task may be "lecture large groups of grade school children").
2. Discharge firearm in dark environment while using a flashlight.
3. Discharge firearm in low light conditions.
Qualifiers inevitably raise the issue of task specificity as well as parallel tasks. In fact, task specificity underlies many of the "Task Writing Rules." Oftentimes, the developer is overly concerned with task specificity because the level of specificity varies with the occupation studied. Thus, a task for one occupation may be a sub-task for another. If a developer is in a quandary as to the actual level of specificity, then a good rule is to move to a more discrete level and write two or more tasks. If field testing doesn't reveal that this level of specificity is too focused, then data analysis may reveal this -- assuming the task statements are actually sub-tasks. One empirical test in data analysis could be that the sub-tasks co-vary almost perfectly. That is, the job incumbents always respond to both sub-tasks. Data analysis should reveal this.
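The co-variance test described above can be sketched mechanically. The sketch below uses hypothetical yes/no survey responses for two candidate sub-tasks; the function name and data are illustrative, not part of any prescribed procedure.

```python
# Sketch of the co-variance check: if two candidate sub-tasks are
# answered identically by nearly all respondents, they may really be
# one task written at too fine a level. Data below are hypothetical.

def co_occurrence(responses_a, responses_b):
    """Fraction of respondents whose yes/no answers agree on both sub-tasks."""
    agree = sum(1 for a, b in zip(responses_a, responses_b) if a == b)
    return agree / len(responses_a)

# 1 = performs the sub-task, 0 = does not (six hypothetical respondents)
subtask_1 = [1, 1, 0, 1, 0, 1]
subtask_2 = [1, 1, 0, 1, 0, 1]

if co_occurrence(subtask_1, subtask_2) > 0.95:
    print("Sub-tasks co-vary almost perfectly; consider merging them.")
```

In practice the threshold for "almost perfectly" is a judgment call for the developer, made after field testing.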
Multiple verbs should be avoided whenever possible. Multiple objects should also be avoided. The reason why both multiple verbs and objects should be avoided is the risk of linking several tasks in a compound sentence. For example:
1. Multiple verbs: "Inspect and repair firearms"
2. Multiple objects: "Prepare cost report or cost estimate for new equipment"
In these examples, two tasks are being linked in one task statement. Whenever it is necessary to link two or more tasks together in one task statement, the developer should have a good reason to justify this. There are only two reasons why a task statement with multiple verbs and objects might qualify as a good task statement. The reasons are:
1. the actions occur together as a whole unit of work, i.e. the actual task, or
2. linking multiple verbs and objects clarifies a single type of work activity that reasonably involves the same training or skills.
The following examples of multiple verbs and objects represent acceptable tasks if one or both of the above conditions exist:
"Clean, gap, or test spark plugs"
"Lubricate front or rear suspension"
"Address letters or packages"
"Record time card or time clock data on payroll forms"
"Remove or install fuel level control or float valves"
Worker qualifications, such as intelligence, aptitude, education, training, and experience, are not tasks. These qualifications support task performance but are not part of the behavior. Since they are not part of the task behavior, they cannot be observed objectively and they do not stand alone. As important as these qualifications are, they are assessed by another methodology and not the task inventory occupational survey method. While workers can recall which tasks they usually perform when appropriate task inventories are presented, they cannot, with any accuracy, provide good data on intelligence or aptitude. These areas are best studied by other means.
Generally, receiving instruction is not a duty or a task unless actual work is performed during training. On-the-job training can include task performance under supervision, as in the case of a Field Training Officer (FTO). In this case, time spent on task meets the criterion for inclusion. The primary basis for exclusion is classroom, shop, or laboratory instruction, wherever the individual receives training but does not perform tasks as part of a job. Giving instruction is included under "Training" because it is a supervisory task. Possible exceptions, such as the FTO example, may be encountered where tasks covering training appear under other duties.
Verb consistency aids in the administration of a clear, unambiguous inventory. Therefore, when several verbs have similar meanings, select one verb to represent the set, provided it is clear to respondents. When this goal cannot be achieved, the use of various verbs with the same meaning needs to be carefully considered so that shades of meaning are not lost.
Parallel tasks may not appear in the same duties. Usually, the performance of one task suggests the performance of another task. The parallel tasks may be part of a sequence of tasks, and each task within a sequence should be placed under its appropriate duty. The analyst may identify one task that has other, "parallel" tasks connected to it which belong under their respective duties. For example, for a task identified as "Inspect fuel pumps" there will likely be "Maintain fuel pumps," included under the duty "Maintenance," or "Repair fuel pumps," included under the duty "Repair."
Task Writing Rules and Subject Matter Experts: Some Considerations
It is not necessary to teach task writing rules to the subject matter experts. However, it is important for the developer to consistently apply them. After a while the experts will model their task statements after successful examples of task statements written by the developer.
If the inventory is an update, then it is a wise idea to develop tasks for the duty headings before providing a copy of the preliminary inventory to the experts. This step can serve as a check against the adequacy of the old inventory. Once the preliminary inventory is in the hands of the experts, it may be more difficult to elicit any new or different duties than those reflected in the preliminary inventory.
Whether developing an inventory from scratch or updating a previous inventory, a duty outline begins the task extraction process. The experts review each duty heading for accuracy and clarity. Any omitted duties are added at this time.
Beginning with non-supervisory duties, the developer could read each task statement aloud from the preliminary inventory or the old inventory. Sometimes it is helpful to share this reading aloud with the SMEs. Experience has shown that starting with a technical worker duty keeps the SME "tracking" because these are "pure" tasks. Pure tasks are those which are technically specific to an occupation. Careful utilization of the following questions will produce acceptable task statements.
Is this task described correctly?
This task sounds pretty vague. Could we make it more specific?
Would this task fit better under Duty F?
Is this task accomplished in the same way in different organizations?
This task sounds pretty specific. Should we make it more general?
Are other components covered?
F. Extracting Task Statements
Extracting task statements refers to a method for eliciting tasks from SMEs. In part, this method is interviewing. Some useful elements of interviewing will be introduced here. The selection of SMEs was discussed earlier, but additional aspects of their selection will be covered here. Whether developing an inventory from scratch or updating an inventory, the developer will have a duty outline. From this duty outline the developer has a framework with which he or she can begin. Adjustments to the duty outline are always possible, but, at a minimum, the developer needs it to focus experts in their effort to create task statements.
At the beginning of the first interview, the developer needs to establish:
1. his or her introduction,
2. the purpose of what the group is doing,
3. and a professional attitude.
The introduction should include information about the developer, his or her preparation, and the objectives to be served. The following example can be adapted to your own personal style.
I have developed a preliminary job inventory for _____________ . Our task is to review and expand upon this inventory. It is a duty and task list which covers these types of jobs. I want to make sure that what we produce here is complete. The Commission will use this inventory in a state-wide survey of peace officers. The purpose of the survey is to collect up-to-date job information.
Some of you may have seen these surveys. If you haven't, each person surveyed receives a copy of the inventory we will develop here. They check the tasks they perform. They will also indicate how frequently they perform each task, plus provide other information about themselves and the work they do.
Some supervisors will also be sampled to rate these same tasks for task difficulty, consequences of inadequate performance, and other factors of training.
The total sample will probably exceed 2,000 and the data will be analyzed to see what work is being done by different people in different types of jobs. The results of this analysis are very important because the Commission will use them to assist in the development of training. In fact, the results will be used in many ways because job analysis is such a basic and sound way to understand the requirements of work.
Here is the duty-task outline I've developed. It reflects my work to collect tasks from previously published documents. I'm sure it is incomplete and we need to rework and expand upon this list. Our goal is to make the inventory as complete and accurate as possible.
Let's take any questions you have up to this point.
Never proceed to the next phase of development until all questions are answered. Be prepared to answer questions on the validity and reliability of the survey method as well as how the information may be used. Review the importance of duties once again.
Preferably, use a flipchart on which to list duty headings and tasks. By controlling the stimulus, the developer can focus attention on the duties on the flipchart. When working on an entirely new inventory, this procedure is probably best. Pacing of the group's work is in the hands of the developer. The advantage of a flipchart over a blackboard is that once a duty or task is written, it can be circled or otherwise marked and copied to some other medium later. Having the developer write out each task while the SMEs are working smoothly will slow the pace--assuming task rules are being met. While other recording methods can be used, the flipchart works best in this situation.
These prompts help SMEs work effectively:
Are there other tasks which should be under this duty?
Does this (verb) (word) (qualifier) describe the task or behavior?
This task is to "Inspect equipment." Are there other tasks done with this equipment?
Let's compare this task with (from another duty). Are there any differences?
That sounds like a qualification for the work involved. What is the actual behavior or task the worker does?
Receiving training is not a duty or task unless actual work is performed. Is it?
That sounds like it might be two tasks. Are they done separately or always together?
Does this task overlap with task X?
Which of these action verbs will most workers readily understand?
Are there unique tasks or functions found in that type of job or assignment?
These tasks seem less specific than those of Duty F. Have we captured them at the right level of specificity?
Are there differences in the way this task is performed on different types of equipment?
Would this involve substantial training?
With almost all of the above questions, the developer can probe or follow up the question with:
1. Can you give me an example?
2. Do the rest of you perceive this task in the same way?
As each acceptable task is designated as a "possible" the developer communicates a standard of task statements to the experts. This means that the application of the "task rules" must be correct and consistent. It may also mean that members of the group may ask for task definitions and task writing rules. Always provide this information in measured response to the question. For example, one SME seems frustrated and says "Well, what is a task?" The best response is to go to the blackboard and provide the definition provided earlier, i.e. a task is a significant, observable behavior that is discrete, and has a frequency to it. It may be appropriate to discuss this definition and even provide a few examples (preferably from those just elicited). If this is not sufficient, provide examples of what is not a task.
Each duty section should be completed before going on to the next duty, yet tasks generated which create parallel tasks will cause the developer to post these tasks as they occur under the correct duty. When the non-supervisory duties are completed, the supervisory tasks could be reviewed.
Developers should expect technical conferences to run smoothly. By communicating your professionalism and applying the task writing rules, technical school and operational base interviews will normally proceed in an uncomplicated manner. Nevertheless, it is wise to prepare for productive group interviews by:
1. selecting appropriate SMEs,
2. scheduling the timing of the group interview for maximum effect,
3. sampling representative areas of the career field,
4. organizing source materials and documentary aids to facilitate a smooth running group interview,
5. and communicating professionalism.
Selecting appropriate SMEs will greatly improve the speed and quality of task statement extraction. Personnel with seven years of experience have been found to have sufficiently balanced experience. Personnel with less experience can provide information on worker tasks done at their particular location, but their knowledge of the entire occupation is usually limited. Personnel with fifteen or more years in an occupation can provide extensive information due to their long experience in the field, but they may no longer be keenly conversant with all the changes in lower level tasks and procedures. Therefore, the optimal group tends to have between seven and fourteen years of experience.
Since the developer will find wide variation in individual experience, there is no guarantee that merely selecting individuals with seven to fourteen years of experience will suffice. Clearly, overcoming problems associated with extracting quality task statements in an expeditious manner depends on selecting SMEs with current knowledge and practical experience. Screening your SMEs is crucial. Imagine yourself as a human resources manager with hiring criteria. Before you start a group interview, individual reviews of an expert's qualifications can be combined with individual interviews. Screening criteria could include:
In spite of being able to apply the above selection criteria, the developer must remain flexible because it is impossible to know which experts will provide the most useful and complete task statements. While cooperation is important, some experts may not be challenged by the task of writing many task statements. Ascertaining which experts will produce is critical. Deselecting individuals after applying the screening criteria is unusual, but developers have rejected and/or replaced experts after group interviews have begun. If for any reason the developer is not extracting the acceptable quality and quantity from group interviewees, then this action is supportable. That is, ask the SME to leave.
Sampling representative duties is fundamental to extracting task statements. While the development of a preliminary inventory from documentation is important, each field visit with subject matter experts is more important. Thus, missing areas of occupational experience could negate all previous steps of inventory construction. That is, the existence of new jobs or the emergence of new procedures or technology is best captured in the field. While scientific sampling of experts is possible using a master list, it is highly unlikely the developer can utilize this choice. Nevertheless, the developer should be aware of any sampling bias that may exist. Careful selection of SMEs to capture variations in the occupational content is the only antidote.
Organizing source materials and documentary aids in advance will facilitate a smooth running group interview. Extracting task statements and recording them should not be hindered by record keeping or reference checking. A flipchart, blackboard, and 4 x 6 cards could all be used at the same time. When preliminary inventories are used, sufficient white space around each task encourages additions, deletions, and corrections by experts. Since a series of group interviews will be held while extracting task statements, no definitive action is taken on task statements until the inventory is finalized. Each group interview could yield different sets of information. Therefore, the developer needs to collate large amounts of information quickly. An assistant in this process is very helpful. The codes below can be entered in the left margin of the draft inventory or next to recorded task statements. They speed up note taking.
N - New task statements
R - Revision or rewording of an original task statement
D - Deletion of an original statement
F - Format revision, e.g., "This task should be listed under Duty 3."
E - Explanation given by the SMEs for one of the above items. For example, a code of ER means "Explanation for a Revision".
C - All other comments
X - Cross reference to last inventory task number (this will permit reviewing percent performing last time and serve to prompt expected increases or decreases in task analysis).
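The margin codes above lend themselves to quick collation after each group interview. The sketch below assumes each note has been keyed in as a code followed by its text; that note format, the function name, and the sample notes are hypothetical conventions, not part of the Commission's procedure.

```python
# Collate margin-coded interview notes into buckets, one per code.
# The "CODE<space>text" note format is a hypothetical convention.
from collections import defaultdict

CODES = {"N": "new", "R": "revision", "D": "deletion",
         "F": "format", "E": "explanation", "C": "comment",
         "X": "cross-reference"}

def collate(notes):
    buckets = defaultdict(list)
    for note in notes:
        code, _, text = note.partition(" ")
        # Compound codes such as "ER" file under the first letter (E).
        buckets[CODES.get(code[0], "other")].append(text)
    return dict(buckets)

notes = ["N Operate radar unit",
         "R Prepare DWI reports",
         "ER Verb changed to match field usage"]
print(collate(notes))
```

A collation like this makes it easier for the developer (or an assistant) to merge different sets of information from a series of group interviews before finalizing the inventory.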
Communicating professionalism to SMEs will help to overcome problems of individual and group interviews. For example, preparation and understanding of the career field will show experts that the developer has done his or her homework. Sincerity and enthusiasm for occupational analysis is communicated by the demeanor of the developer during this task. Pacing of the interviewees is entirely in the developer's hands. Expect some boredom and lack of concentration in group interviews. Deal with it by being aware, soliciting feedback, and suggesting breaks at appropriate times. Generally, experts enjoy the task and will reinforce each other's interest in their profession. By clearly demonstrating that he or she is in charge, competent, and prepared, the developer guides the climate of the interview and reinforces a task oriented process.
The issue of rest breaks will come up. Any system of breaks which permits the group to achieve its goal is acceptable. Long work sessions with regular breaks every hour have been used. Unusually long breaks, if the experts break as a group and stay together, have been successful. The developer must assess each group and establish a system that works best. A democratic decision about breaks is acceptable if the group has worked together for a few hours and understands the demands of their task.
H. Editing and Screening Task Statements
Editing of task statements is an ongoing function. At every phase of inventory construction, editing principles can be applied. For example, the application of task writing rules while in process with SMEs will help to shape task statements into the correct format. However, at the end of each interview or review session, the re-application of task writing rules is appropriate. If each task statement satisfies these rules, then the next step is to collate these statements into their appropriate duties and arrange them alphabetically within duty.
At the editing stage, each task statement should have a single action verb and object. If a statement has qualifiers, the test is clarity, i.e., the qualifier must be necessary to make the task clear. Editing for consistency, format, and application of task writing rules will reveal these kinds of problems:
a. redundant tasks
b. double meanings of words
c. extraneous words
d. incorrect level of task specificity
Sort verbs alphabetically within each duty and for the entire inventory. These two listings can be screened for redundant tasks, since there is a good chance similar actions will be coded by the same verbs. At the same time, these two listings may surface new parallel tasks. Be extremely critical of words with imprecise meaning. Edit out extraneous words. Keep task statements short, simple, and clear.
A second listing of all alphabetized nouns by inventory and by duty should also be produced. This listing will reveal redundant tasks wherein the action verb may be different, but the thing acted upon (the object, which is usually a noun or pronoun) is the same. This type of listing will reveal redundancies and help eliminate duplications. It will also critique the developer's efforts by suggesting where more parallel tasks may be needed. If the developer lists all persons, places, and things in order to create a noun listing, the list will be much longer than the number of task statements. Nevertheless, this step is helpful. This step will also point out the actual usage of qualifiers because the longer the qualifier, the longer the noun listing will be. Therefore, developers who employ noun listing will have a listing which clearly shows which tasks have a high preponderance of qualifiers - possibly to the detriment of task writing rules.
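The verb and noun listings described above can be sketched as a short grouping routine. The task statements below are invented for illustration, and the assumption that the action verb comes first in each statement follows the task writing rules discussed earlier in this manual.

```python
from collections import defaultdict

# Hypothetical task statements: action verb first, object following.
tasks = [
    "Inspect patrol vehicle",
    "Inspect duty weapon",
    "Complete arrest report",
    "Prepare arrest report",   # same object as above: a candidate redundancy
]

by_verb = defaultdict(list)
by_noun = defaultdict(list)
for t in tasks:
    verb, _, obj = t.partition(" ")
    by_verb[verb].append(t)
    by_noun[obj].append(t)

# Verbs shared by several tasks may hide redundant or missing parallel tasks.
for verb in sorted(by_verb):
    print(verb, by_verb[verb])

# Objects acted upon by different verbs can reveal duplications.
for obj in sorted(by_noun):
    if len(by_noun[obj]) > 1:
        print("Possible redundancy:", by_noun[obj])
```

Here the noun listing flags "arrest report" because two different verbs act on the same object, exactly the kind of duplication the paragraph above describes.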
Both verb and noun alphabetical listings are very helpful in improving the quality of the task inventory. The problems raised will be self evident, i.e., redundant or overlapping tasks, or the need for parallel tasks. However, task specificity, which is important to the measurement of tasks, is not directly affected by alphabetical listings. If editing reveals that there is uncertainty as to the correct level of specificity, then the developer must create a procedure to establish the correct level of specificity for future conferences. If warranted, experts can rate the level of specificity in a systematic manner.
If there is uncertainty whether activity statements are sufficiently specific, rather than representing general levels of work, it is possible to have a few people rate each statement. This may also be useful as a learning device with persons who are inexperienced in stating tasks, to draw their attention to discrepant statements.
Obtain at least five persons who are sufficiently knowledgeable of the occupation to understand what work is represented by the listed activity statements. Describe to them the intended level of specificity, illustrating with statements that are too broad and general, too specific and detailed, and some that are at the correct level of specificity for a task. These illustrations need not be from the same occupation.
Ask these persons to read each activity statement carefully and to give it a rating from one to seven, with a rating of 4 being used for a proper task level of work activity. Their ratings can be written beside each listed task, and the developer can later summarize for all raters on a separate listing. Caution the raters not to use the high end of the scale to reflect good, clear statements of tasks, with the low end being poor statements. This is to be a rating of specificity of the activity represented, not of the clarity of the statement. The "4" rating is the desired objective, not the "7."
When five raters are used, four out of five should have rated an activity statement at a level of 3, 4, or 5. Any tasks that fail to achieve this degree of agreement should then be tagged for more critical review. If necessary, additional reviewers can be obtained and the rating step repeated for those tagged activity statements.
The procedure can be done at any time, but would most likely occur within a technical conference after the editing of task statements revealed a task specificity problem. However, the procedure is only useful if the raters know the purposes for which these tasks are to be used.
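The agreement rule above (at least four of five raters scoring a statement at 3, 4, or 5 on the seven-point scale) can be sketched as a short screening routine. The task statements and ratings below are invented for illustration.

```python
# Each task maps to the 1-7 specificity ratings given by five raters.
# A rating of 4 represents the proper task level of work activity.
ratings = {
    "Direct traffic at accident scene": [4, 4, 3, 5, 4],
    "Perform law enforcement duties":   [1, 2, 1, 7, 2],  # too broad, inconsistent
}

def needs_review(scores, min_agree=4):
    """Tag a statement when fewer than min_agree raters scored it 3, 4, or 5."""
    in_band = sum(1 for s in scores if 3 <= s <= 5)
    return in_band < min_agree

flagged = [task for task, scores in ratings.items() if needs_review(scores)]
print(flagged)  # ['Perform law enforcement duties']
```

The flagged statement would then be tagged for more critical review, and additional raters obtained if necessary.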
I. Building the System, Equipment, and Tools List
Some career fields will require a System, Equipment, and/or Tools (SET) list. Since task statements should use qualifiers judiciously, a SET list will aid in reducing the number of qualifiers. The use of a SET item, in and of itself, is not necessarily a task activity. While there may be a desire to state these as tasks, it is much more useful to list them separately. Thus, the SET list will aid in omitting many action verbs such as "operate" or "use" - especially where these could be elements or subtasks.
1. These simple rules will aid in the development of a SET list:
a. the SET item should help in the analysis of jobs by clearly differentiating possible group or skill levels
b. the SET list should be as brief as possible
c. the SET list should categorize items rather than list specific items, if possible
d. Tools and Test Equipment Used: A list of tools or test equipment used will aid evaluation of courses. Many technical courses have blocks of instruction in which tools and test equipment are taught. This instruction is difficult to evaluate with tasks. In addition, tools used often correspond to different job groups.
e. Equipment Models and Brand Names: Listing a series of repair tasks for every variation of equipment such as pumps and valves is infeasible in a long task list. Listing brand names or model numbers of equipment in the SET can reduce the size of the task list. This information is helpful when planning which equipment should be taught in technical school.
Remember that the SET list is part of the Background Section. It performs unique functions, however, so paying careful attention to this area of development will aid in reducing the length of the task inventory while providing useful information for job typing and training.
The appendix includes examples of inventories with different approaches to a SET list. The developer should carefully review both for the trade-offs necessary for successful analysis. The SET list can be correlated with task performance in the analysis stage.
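The rules above can be illustrated with a small categorized SET list. The categories and items below are invented for illustration and are not taken from a Commission inventory.

```python
# Hypothetical SET list: brief and categorized rather than itemized (rules b, c).
set_list = {
    "Vehicles": ["Patrol car", "Motorcycle", "Bicycle"],
    "Weapons": ["Handgun", "Shotgun", "Less-lethal device"],
    "Test equipment": ["Radar unit", "Breath-test instrument"],
}

# Respondents check the SET items they use, so "Operate <item>" statements
# can be dropped from the task list, shortening the inventory.
redundant_tasks = [f"Operate {item}"
                   for items in set_list.values()
                   for item in items]
print(len(redundant_tasks))  # 8
```

In this sketch, eight potential "operate/use" task statements are replaced by a single background checklist, which is the trade-off the SET list is meant to capture.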
J. Initial Field Visits
The Interview Process
This section will cover the subjective factors which are central to a "give and get" interview conducted in job analysis. Elsewhere in this manual one can find information on the principles of task statement writing or specific questions one can ask to clarify task statements. Independent of this type of objective information is the important, but fuzzy issue of subjective interview factors.
Subjectivity refers to one's inner perspective. How one SME perceives a task is a function of this inner perspective. Though tasks are purely objective, they are "produced" in interviews. This method of collecting data requires more than cooperation on the part of SMEs. It requires the developer to conduct an interview with sensitivity towards subjective factors. That is, the skills of a well qualified interviewer will greatly aid the SMEs in their objective - to produce task statements of high quality. Therefore, the developer wishes to guide the interview process to a successful completion.
The subjective factors which will influence the adequacy of field investigation are:
1. time
2. place
3. social circumstances
4. language
5. rapport
6. consensus
1. Time. Since the developer will spend relatively little time with a group, he or she is perceived as an outsider or, at least, a newcomer. That is, the developer is a visitor who is not a specialist in the career field. The developer can fully expect to be on trial for the first two hours of a day-long session. Perhaps the developer will never move beyond this phase and become an accepted member of the group. The usefulness of the data depends upon the full cooperation of the SMEs.
Since the developer controls the use of time, he or she should recognize that its use affects the group's perception of him or her. Therefore, a professional demeanor which reflects the developer's preparation, sincerity, and enthusiasm for the task at hand will aid in overcoming the shortness of time with the group. Since the group will not understand the standard to which it must work, the first two hours are critical in communicating the mission. Without other information to judge, the experts will judge the developer by his or her use of time. Their acceptance will be explained by the preparation and demeanor of the developer since exposure to them is so short.
2. Place. The physical environment should be conducive to interviewing. It should be free from distractions such as ringing telephones. It should permit all interviewees to see a flipchart, blackboard, and each other. Ideally, the group should work at a large circular table. In sum, the place of interviewing should not be a competing stimulus vying for the attention of the experts. It should be a comfortable, but worklike environment.
3. Social Circumstances. Differences in rank, experience, and status between interviewees or between interviewer and interviewee can influence information given. Only the developer can conduct the technical conference. This means to ask, clarify, verify, and probe in an accepting but firm manner. No one expert should dominate the group. No developer should be an "order taker." Social circumstances can contribute to obstacles in the way of giving and getting information. The developer should be aware of social circumstances in order to minimize them or temporarily set them aside.
4. Language. The developer's sensitivity to connotations, phrasings, and nomenclature specific to the peace officer career field will enhance the adequacy of a "give and get" interview. Interpretation of tasks and their appropriate coding is contingent on rapidly acquiring this sensitivity. Be especially observant of words most often repeated, slang, special technical terms, and abbreviations or acronyms peculiar to the career field.
5. Rapport. Rapport implies a close, harmonious relationship. Since a "give and get" interview is semi-structured and, task oriented, rapport is an indication that there is mutual acceptance. The behaviors you manifest are sincerity, acceptance of others, honesty, and a willingness to be open. Reinforcing these same behaviors in SMEs will help to encourage rapport.
The subjective factors that can influence the interview process (time, place, social circumstances, language, consensus, and rapport) have been reviewed. Each has some bearing on the interview process as well as interacting with the others. For example, rapport is most affected by time, place, social circumstances, and consensus. As subjective factors, the developer constantly assesses their presence and influence. Adjustments to the needs of each group are under the control of the developer.
These specific suggestions come from Air Force experience:
1.) Individual interviews are more time consuming, but yield a high number of quality task statements.
2.) Group interviews should be limited in size to no more than six, with four optimal.
3.) Group interviews, i.e., the technical conference, are best when time is limited, because consensus can be reached on points of disagreement, the diversity of the group stimulates productivity, and a broader sampling of experience is possible.
4.) Large status differences are dysfunctional to productive groups, e.g. immediate supervisors of experts may constrain productivity.
5.) After introduction, soliciting an overview of the occupation from the experts is an excellent way to establish rapport for a "give and get" interview.
6.) Keep the interviewees focused on the tasks of their occupation.
7.) Be open and non-evaluative to encourage participation but assertive when tasks do not satisfy the task writing rules.
8.) Do not let any one expert dominate the group.
9.) Draw out and elicit tasks from less talkative interviewees, i.e., canvass.
10.) Allow experts to decide the tasks but establish that they understand what a task is.
11.) Take breaks whenever justified, i.e., at normal times or when fatigue is setting in.
12.) Insure throughout the interview that tasks are discrete, i.e. each task is performed in its entirety, and that each task is separate and distinct from all other tasks in the inventory.
13.) At any given time be able to define a task and relate that to the appropriate level of specificity for a career field.
TASK: a discrete, observable behavior, an action that is significant, begins and ends, performed in relatively short periods of time.
LEVEL OF SPECIFICITY: the degree of exact task description (action verb and object) which captures the correct task level behavior for a given career field. That is, a task specifically describes the behavior meeting the definition above and falls between duty and element or sub-task.
14.) Building inventories is easiest if a developer begins the interview with "pure" tasks, that is, those which are technical to a given occupation. For example, maintenance career fields start easiest with equipment related tasks.
15.) At conclusion of the interview:
a. Review completed task list against tentative task list and documentation to insure complete coverage.
b. Job functions not covered in interviews need to be developed. Ask interviewees where this information can be obtained.
c. Exchange names and phone numbers for follow-up.
6. Consensus. When experts write task statements, they should agree on the meaning of the words used in writing tasks. Verbal and non-verbal agreement and disagreement can be subtle behaviors. The developer should confirm agreement and probe areas of disagreement with members of the group. Sources of power and influence (e.g., social circumstances) may affect consensus. Without detecting this and intervening by probing, the developer could become an "order taker." To achieve consensus, canvass all SMEs from time to time to show that each opinion is important.
K. Final Equipment List
Adjustments are continually made to the task inventory because of the developmental steps leading to finalization. The SET list will therefore require adjustment too. Since the developer strives to keep qualifiers to a minimum, a comprehensive SET list will aid that goal because many tasks which start with "operate" or "use" "X" system, equipment and/or tool can be deleted in favor of using the SET section of the questionnaire (see Section IV E).
Judgment is exercised with the SET list because, in general, the list will decrease the total number of task statements. Often, the reduction follows this logic:

    SET item in inventory + sub-task 1 + sub-task 2 + sub-task 3
        combine to produce a single Task

In the above example, the developer would be confident that the frequency rating adequately measures the behavior of the incumbent. If each of the sub-tasks were listed in the inventory, then smaller time-spent values would result and their interpretation would be more difficult.
Since the converse of the above example exists, the developer must decide how a SET item affects the inventory. For example, the logic can also work this way:

    SET item in inventory
        Task 1 with SET as a qualifier
        Task 2 with SET as a qualifier
        Task 3 with SET as a qualifier

In this example, inclusion of the SET item in the task statement is justified, but it will not reduce the length of the inventory. Since the distribution of use of the SET item could vary across Tasks 1, 2, and 3, the task statements and the SET item are useful in both places.
L. Writing the Project Summary
A Project Summary serves the purpose of communicating the context surrounding the developer's work. For example, the reason the inventory was developed should be included in the summary. Special considerations need to be noted, such as changes in police responsibilities and new equipment. The following outline has proven useful in organizing the Project Summary.
A.) Reasons and Special Considerations
1. What were the reasons the inventory was requested? (usually multiple reasons)
2. Special considerations surrounding the inventory.
B.) Interview Locations and Selection Process
1. How were interview locations determined?
2. Why were these locations a good source (why not other locations)?
3. List all organizations visited
4. List all names of staff personnel contacted, such as classification, training, course supervisors, and functional representatives.
C.) Task List Organization
1. How and why the task list is organized, e.g., by function, assembly, or equipment
2. Cite major reasons why the task list has changed (if updated from previously developed task list)
D.) General Questionnaire Information
1. Include rationale for selected background questions
2. Include any additional information helpful in explaining types of equipment, career ladders issues, newness of equipment, possible overlap of jobs
3. Note special locations of equipment
4. Note career field changes that might occur
5. Cite any career field problems that the analyst should know
6. Note any special use of equipment, for example, type XYZ equipment is used only in SWAT units.
7. Cite any problems encountered in development, e.g., personnel, arrangement of visits.
8. Note any deviations to the "Task Writing Rules"
M. Survey Expectations
1. Record subjective estimates of training, morale, manpower utilization, and adequacy of field organization.
2. What are the expected differences in perception of training emphasis or task difficulty by job?
The outline above is a rough guide and will serve to shape the developer's thinking toward an effective and useful Project Summary. In this way, the purposes of the summary will be met by communicating any and all necessary context surrounding inventory development.
The inventory developer faces the challenging area of sampling at least twice. Initially, the developer's concern for coverage of the career field leads to an understanding of the functional description of the field. Then, the distribution of incumbents across those functional areas needs to be understood. The combination of these two pieces of information will guide the developer toward an adequate picture of the "true" distribution of career field work.
One advantage of the task inventory is that complete enumeration is often possible because it is a survey-oriented method. At the least, sample sizes can be very large due to the nature of this methodology. Therefore, high generalizability should be achieved with representative and large samples when a complete census isn't cost effective.
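When a complete census is not cost effective, the two pieces of information described above (the functional areas of the career field and the distribution of incumbents across them) suggest allocating the sample in proportion to each area's share of incumbents. A minimal sketch, with invented functional areas and incumbent counts:

```python
# Hypothetical incumbents per functional area of the career field.
strata = {"Patrol": 600, "Investigations": 250, "Traffic": 100, "Administration": 50}
total = sum(strata.values())
sample_size = 200  # desired overall sample

# Allocate the sample in proportion to each area's share of incumbents,
# so the sample mirrors the "true" distribution of career field work.
allocation = {area: round(sample_size * n / total) for area, n in strata.items()}
print(allocation)
```

Proportional allocation of this kind is one simple way to keep the survey sample representative of the functional description of the field.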
III. Summary and Conclusions
The construction of the task inventory is the first important step of this type of job analysis and needs analysis. This manual defines the steps of inventory construction and provides exposition on the state-of-the-art as developed by Air Force researchers and adapted to the Commission.
The keys to successful inventory development are:
1. exhaust all documentary sources until duties or broad functional areas are known,
2. separate supervisory from nonsupervisory functions,
3. work with adequate samples of SMEs to produce task statements which meet the criteria stated in this manual,
4. construct the task inventory at the correct level of specificity to satisfy analysis objectives,
5. assemble background items and equipment lists that produce the information that the occupational analyst and functional representatives can utilize,
6. test the inventory in the field for clarity, accuracy, and exhaustiveness,
7. and create explanatory information about the context surrounding inventory construction and the career field so data analysis can be straightforward.
REFERENCES
Ammerman, H. L. (1977). Performance content for job training (Vol. 2): Stating the tasks of the job (R and D Series No. 122). Columbus: The Ohio State University, The Center for Vocational Education.
Archer, W. B., & Fruchter, D. A. (1963). The construction, review, and administration of Air Force job inventories (PRL-TDR-63-21). Lackland AFB, Texas.
Christal, R. E. (1974). The United States Air Force occupational research project (AFHRL-TR-73-75). Lackland AFB, Texas.
Fruchter, B., Morin, R. E., & Archer, W. B. (1963). Efficiency of the open-ended inventory in eliciting task statements from job incumbents (PRL-TDR-63-8). Lackland AFB, Texas.
Graham-Moore, B. (1982). The comprehensive occupational data analysis programs (CODAP) and the occupational survey methodology (OSM): Inventory development manual (CCS No. 422). Austin: The Center for Cybernetic Studies, The University of Texas at Austin.
Lytle, C. W. (1954). Job evaluation methods (2nd ed.). New York: Ronald Press.
Mayo, C. (1969). Three studies of job inventory procedures: Selecting duty categories, interviewing, and sampling (AFHRL-TR-69-32). Lackland AFB, Texas.
McCormick, E. J., & Ammerman, H. L. (1960). Development of worker activity check lists for use in occupational analysis (WADD-TR-60-77). Lackland AFB, Texas.
Morsh, J. E., & Archer, W. B. (1967). Procedural guide for conducting occupational surveys in the U. S. Air Force (PRL-TR-67-11). Lackland AFB, Texas.
Morsh, J. E., Madden, J. M., & Christal, R. E. (1961). Job analysis in the U. S. Air Force (WADD-TR-61-113). Lackland AFB, Texas.
1. Inventory Booklet