The Staff Selection Interview Schedule (SSIS):
A New Instrument to Select Postgraduate Nursing Students
AEJNE Volume - No.1 June, 2000.
Nick Santamaria, RN, RPN, B.App.Sc. (Adv Nsg), M Ed St., Grad Dip Health Ed., PhD.
Lexie Clayton, RN, RM, BN, MHA, FRCNA.
This study describes the development and testing of a new nursing staff selection interview schedule (SSIS). The SSIS brings together in a single instrument elements of the Transformational Model for the Practice of Professional Nursing (Wolf, Boland & Aukerman, 1994), behavioural consistency approaches suggested by Schneider & Schmitt (1986) and contemporary staff selection theory and research.
The results of the reliability and validity analyses of the SSIS in selecting a group of registered nurses (n=80) applying for a collaborative university/hospital critical care course suggest that the SSIS is valid, reliable and stable over time. The approaches that have been used in developing the SSIS are believed to be generalisable to the development of other valid and reliable nursing staff selection instrumentation for situations where applicants have limited specific past experience for a position.
The selection of nursing staff intending to undertake a collaborative hospital/university Graduate Diploma in Nursing (Critical Care) course is a complex process. Careful consideration needs to be given to the process to ensure that it is valid, reliable and consistent with the goals of professional practice and the relevant equal employment opportunity (EEO) legislation.
As employees, postgraduate students hold concurrent roles; the selection process therefore needs to go beyond identifying the most clinically suitable nurses to also identify candidates with the greatest potential for academic success. To further add to the complexity of the task, the candidates for this position have, in most cases, never worked in critical care areas, yet they are personally and professionally oriented to this area of nursing practice.
The Graduate Diploma in Nursing (Critical Care) course has been conducted collaboratively between the hospital and the university for three years. The course is structured so that students are employed by the hospital for four days per week and attend lectures at the university one day per week. During their one year at the hospital, students undertake four clinical placements in critical care areas, ICU, Coronary Care, Cardiothoracics and Emergency Department.
The challenge facing the hospital and university is to implement a selection process that identifies which of the applicants for a position on the course have the greatest potential for clinical and academic success.
The collaborative nature of the course necessitates that both the university and the hospital undertake different stages of the selection process. Figure 1 presents the stages involved in the process.
As can be seen from Figure 1, candidates' applications go through a process that firstly establishes their suitability to the university and secondly to the hospital. From the perspective of the hospital, candidates' applications are given a rating score in relation to the length and type of clinical experience, continuing professional education activities which have been undertaken and evidence of clinical experience in high dependency nursing areas. At the completion of this stage, a number of candidates are invited to participate in an interview. This interview forms the final stage of the selection process and results in offers of employment being made to the successful candidates.
During the three years that the course has been conducted there have been a number of concerns expressed by senior nursing staff regarding the interview stage of the process. The concerns related to the validity of the interview, the questionable logic underpinning the structure of the questions and the less than ideal performance of some of the students who had been selected through the interview. The interview asked candidates to respond by describing how they would act in a number of hypothetical clinical scenarios drawn from nursing practice in the critical care areas. The underlying assumption was that the candidate whose response most closely corresponded with the views of the interview panel was the "best" candidate and therefore received the highest score. This approach had not been validated by research; rather, it was based on the historical practices of the hospital for job interviews.
As a result of the concerns which had been expressed, it was decided to develop and test a new interview schedule, the SSIS, to be used in the selection of these postgraduate nursing students. There are three major differences between the SSIS and the previous interview format. The first is the greater level of structure of the SSIS, which is used to enhance its reliability and validity. The second is the use of the Transformational Model for the Practice of Professional Nursing (Wolf, Boland & Aukerman, 1994) as an organising framework for the SSIS questions, and the third is the use of behavioural consistency theory (Schneider & Schmitt, 1986) to assess candidates' responses to the SSIS questions.
The following sections of this paper describe the developmental process of the SSIS as it relates to current staff selection theory, and the results of the reliability and validity testing which took place over a two-year period.
Methodological Issues in the Structured Selection Interview and the Development of the Staff Selection Interview Schedule (SSIS)
The use of a structured interview has been widely accepted as a major component of staff selection. This review will focus on the areas where general agreement appears in the literature and will relate these areas to the decisions taken during the development of the SSIS.
Campion et al. (1997) describe the aims of structuring interviews as the pursuit of higher levels of reliability and validity and assisting the selection panel in making the best possible decision. The approach of basing questions on a job analysis is supported by the literature and appears logically appealing, although Latham & Finnegan (1993) note that the degree to which an interview can be aligned with a job analysis depends on both the job description and candidate characteristics. Within the job-analysis approach, the use of critical incidents as a means of deriving questions for the interview is a commonly employed method (Campion et al., 1994) which is thought to improve the validity of the interview by increasing its degree of structure. No studies are reported in the literature which investigate the effects of the job-analysis and critical incident approaches in the selection of nursing staff who will be undertaking postgraduate studies as part of their employment.
It was believed that using a strict job-analysis approach to develop interview questions was inappropriate for postgraduate candidates due to the multiple roles held by the students and their lack of specific critical care nursing experience.
A potential problem with not basing the interview questions on a job analysis was the reduction in the degree of structure of the interview, with the attendant reduction in the reliability and validity of the selection process. Therefore, as a means of maintaining the structural properties of the SSIS while aligning it with professional nursing practice, the decision was taken to base the interview questions on the Transformational Model for the Practice of Professional Nursing (Wolf, Boland & Aukerman, 1994).
The SSIS questions were specifically developed from the 'professional growth' grid of the model by a group of nurse unit managers and clinical educators from the critical care areas. This decision was based on the belief that it was more important to select candidates with a strong orientation to, and evidence of, professional development than to select on the basis of being able to respond to hypothetical, job-analysis derived scenarios. The advantage of structuring the SSIS in this manner was that it was possible to develop a clearly defined rating scale for each sub-dimension of the grid. The rating of candidates' responses is strongly supported by Delery et al. (1994) and Green et al. (1993) as a means of increasing the reliability of the selection interview. A further advantage of using the model was that clear anchor points for the rating scales are described in the grid. The anchoring of rating scales further adds to the structure of the interview and therefore increases the interrater reliability of the instrument (Vance et al., 1978).
The type and quality of questions used in an interview can profoundly affect the degree of structure and consequently the reliability and validity of the selection process (Green et al., 1993; McDaniel et al., 1994). The SSIS employs 12 questions organised into six pairs. Each pair of questions specifically relates to one of the six sub-dimensions of the grid. The first question seeks the candidate's personal beliefs regarding the particular sub-dimension and the second question asks for a specific example of how the candidate had operationalised that belief in their past professional practice. The use of past behaviour questions to increase the reliability of the interview is supported by Schneider & Schmitt (1986), Motowidlo et al. (1992), Green et al. (1993) and Pulakos & Schmitt (1995). Mumford & Stokes (1992) note the positive effect on validity of past behaviour questions. Campion et al. (1994) compared the validity of future (situational) questions to that of past (behavioural) questions and found increased validity from past questions; similarly, Pulakos & Schmitt (1995) found behavioural questions to have higher validity than situational questions.
To further enhance the structure of the SSIS, all candidates were asked exactly the same questions in the same order. This level of standardisation raises the levels of interrater and test-retest reliability (Huffcutt & Arthur, 1994) and simplifies the rating of candidates. Standardisation also reduces EEO bias (Dipboye, 1994). Latham & Finnegan (1993), however, advise against too rigid an approach to asking questions because of the potential negative reactions from both candidates and interviewers.
Campion et al. (1997) comment on the effects of the length of the interview on reliability and validity. There appears to be consensus in the literature that the length of interviews should be in the range of 30-60 minutes. The SSIS interview takes approximately 30 minutes to conduct. It was believed that this length was appropriate due to the preceding screening of applicants conducted firstly by the university and subsequently by the hospital prior to the interviews taking place. Tullar, Mullins & Caldwell (1979) suggest that longer interviews may be more appropriate for candidates with long experiential backgrounds. Marchese & Muchinsky (1993) caution against overly long interviews due to the possibility of overload and the negative effect on the memory of the selection panel.
It was decided that candidate selection using the SSIS would be based on statistical rather than clinical prediction. This process involved summing the scores for all questions to provide a total performance score for each candidate. Total scores for candidates were then compared and the highest scoring candidates, in descending order, were selected until all places on the course were filled. No weighting of individual questions for the sub-dimensions was used. Ancillary information was not included in the final selection of candidates because the short listing process that took place prior to interview reviewed ancillary information such as previous studies and clinical experience. The use of statistical prediction in the selection process is contentious in the literature. Conway et al. (1995) reported higher reliability with mechanical (statistical) methods as opposed to subjective judgements; however, Pulakos et al. (1996) found that consensus-based ratings had similar reliabilities to solely statistical methods.
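The statistical prediction rule described above (sum the twelve unweighted item scores, rank candidates in descending order, fill the available places) can be sketched in a few lines. This is an illustration only, not the authors' actual procedure; the candidate labels and item scores are hypothetical.

```python
def select_candidates(scores, places):
    """scores: dict mapping candidate -> list of 12 item scores (1-4).
    Returns the top `places` candidates by total SSIS score,
    highest-scoring first."""
    totals = {c: sum(items) for c, items in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)[:places]

# Hypothetical applicants and their 12 item ratings.
applicants = {
    "A": [3, 2, 4, 3, 3, 2, 3, 3, 4, 2, 3, 3],  # total 35
    "B": [2, 2, 3, 2, 3, 2, 2, 3, 3, 2, 2, 2],  # total 28
    "C": [4, 3, 4, 4, 3, 3, 4, 3, 4, 3, 3, 4],  # total 42
}
selected = select_candidates(applicants, 2)  # -> ["C", "A"]
```

Because no sub-dimension is weighted, the rule reduces to a simple ranking of totals, which is what makes it a "mechanical" method in Conway et al.'s (1995) sense.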
It was decided at this point that the performance, reliability and validity of the SSIS should be investigated by using the SSIS for the selection of a group of postgraduate nursing students. The following sections describe this process and our findings.
A multiple method approach was employed to investigate the reliability and validity of the Staff Selection Interview Schedule (SSIS) over a period of two years (1996 & 1997).
Subjects and sampling
The subjects comprised 80 registered nurses applying for a Graduate Diploma in Nursing (Critical Care) Course being jointly conducted by a university and the hospital.
The study was conducted at a major metropolitan teaching hospital in Melbourne with a long history of providing postgraduate critical care nursing education.
Instrumentation and measurement
The Staff Selection Interview Schedule (SSIS) comprises twelve questions arranged into six pairs. Each of the six pairs is derived from the major themes of the "professional growth" grid of the transformational model. The first question in each pair requires the respondents to provide their interpretation, understanding and beliefs about the theme, and the second question asks for an example of how they have operationalised this belief in their own nursing practice in the past. Each question is scored by comparing the response of the applicant to the four performance levels described in the model's grid. These levels comprise, from low to high, reactive, responsive, proactive and high performance. Subjects were allocated a score of between one and four depending on the match between their response and the definitions provided on the grid. Employing this approach enabled a calculation to be made of the consistency between the subjects' stated beliefs and their actual past performance. Measurement was undertaken at initial interview and again twelve months later by a team comprising the nurse unit managers from the critical care areas and the clinical educators responsible for the students during the course.
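The scoring scheme above can be made concrete with a short sketch. The level names come from the grid as described; the function name, the example ratings, and the use of an absolute difference as the belief-behaviour consistency index are assumptions for illustration, not the authors' published scoring code.

```python
# Grid performance levels mapped to the 1-4 scores described in the text.
LEVELS = {"reactive": 1, "responsive": 2, "proactive": 3, "high performance": 4}

def pair_consistency(belief_level, behaviour_level):
    """Score one question pair: returns (belief score, behaviour score,
    absolute gap between them as a simple consistency index)."""
    b, h = LEVELS[belief_level], LEVELS[behaviour_level]
    return b, h, abs(b - h)

# One hypothetical candidate's ratings across the six question pairs.
pairs = [("proactive", "responsive"), ("high performance", "proactive"),
         ("responsive", "responsive"), ("proactive", "proactive"),
         ("reactive", "reactive"), ("proactive", "responsive")]
scores = [pair_consistency(b, h) for b, h in pairs]
total = sum(b + h for b, h, _ in scores)  # candidate's SSIS total (12-48)
```

A zero gap on a pair indicates that the candidate's stated belief and reported past behaviour were rated at the same grid level.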
The internal reliability of the SSIS was determined by calculating item-to-total correlations and Cronbach's coefficient alpha according to methods defined by Anastasi (1988). The minimum correlational level to retain an item was set at 0.50. The stability over time of the SSIS was investigated by employing a test-retest procedure with a time interval of one year. Correlations were calculated for each SSIS item and for the SSIS total score. It was reasoned that if the SSIS items were meaningful and stable, then the performance of the subject in each of the six dimensions represented by the items and the SSIS totals should remain stable for each subject over time. Content validity was explored using a confirmatory factor analysis with an a priori specification of six factors according to Wolf et al. (1994). The aim was to determine whether the six factors contained in the grid were independent, with no overlap of content between factors.
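The two internal reliability statistics named above can be sketched as follows. The data matrix is synthetic and the helper names are assumptions; the formulas (Cronbach's alpha from item and total variances, and the corrected item-to-total correlation of each item against the total with that item removed) are the standard ones.

```python
from statistics import mean, pvariance

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def cronbach_alpha(items):
    """items: k lists, each holding one item's scores across all subjects.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    totals = [sum(subject) for subject in zip(*items)]
    return k / (k - 1) * (1 - sum(pvariance(i) for i in items)
                          / pvariance(totals))

def corrected_item_total(items):
    """Correlate each item with the total score minus that item."""
    totals = [sum(subject) for subject in zip(*items)]
    return [pearson(item, [t - x for t, x in zip(totals, item)])
            for item in items]

# Hypothetical 1-4 ratings for three items across six subjects.
items = [[3, 2, 4, 3, 2, 4],
         [3, 3, 4, 2, 2, 4],
         [2, 2, 3, 3, 1, 4]]
alpha = cronbach_alpha(items)          # roughly 0.90 for this data
r_it = corrected_item_total(items)     # compared against the 0.50 cut-off
```

Under the study's rule, any item whose corrected item-to-total correlation fell below 0.50 would be flagged for rejection.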
The mean age of subjects was 26.3 years (range 24.2-41.7, SD 6.86). The relative youth of the cohort was believed to be generally representative of nurses applying for postgraduate critical care courses.
The mean scores and standard deviations for each SSIS item and the SSIS total presented in Table 1 demonstrate a general homogeneity of scores in each pair of questions.
To further investigate the internal reliability of the SSIS, coefficient alpha was calculated, yielding a value of 0.76 (n=80).
The stability of the SSIS items and total score over time was explored by undertaking two measures from a sub-group of 20 subjects, the first at interview and the second following an interval of one year. It was reasoned that, if the SSIS was in fact measuring stable psychological dimensions, then there should be a statistically significant correlation between the two measurements.
To investigate the interrater reliability of the SSIS-based interview, the scores given by each interviewer for each candidate were analysed by calculating a correlation coefficient for each item, and the results of the four interview teams were compared. Table 4 presents the correlations between the ratings given for each item by each two-person interview team.
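The per-item interrater check can be sketched as a single correlation between the two team members' ratings across candidates. The ratings below are synthetic; in the study this calculation would be repeated for each of the twelve items and each of the four teams to build Table 4.

```python
def pearson(x, y):
    """Pearson correlation of two raters' scores across candidates."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical ratings for one SSIS item from a two-person team
# across five candidates.
rater_1 = [3, 2, 4, 3, 2]
rater_2 = [3, 2, 4, 2, 2]
r = pearson(rater_1, rater_2)  # high r indicates the pair rated alike
```

A correlation near 1 for an item indicates the two interviewers applied the grid's anchors consistently; low values point to items or teams needing further rater training.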
Table 5 reports the factor loadings for each SSIS item on the six sub-dimensions of the "professional growth" grid of the Transformational Model for the Practice of Professional Nursing (Wolf, Boland & Aukerman, 1994). The results demonstrate that the items are mutually exclusive and that there is no overlap between factors.
Table 6 presents typical interviewer and candidate responses to the SSIS interview.
The findings of the reliability component of the study suggest that the SSIS has an acceptable level of reliability for an interview instrument. Of interest were the results of the descriptive statistics presented in Table 1, which demonstrate the homogeneity of the mean scores for each item in the SSIS. Notable in this table was the generally lower score for each of the "behavioural" items compared to the "belief" items. The reason for this difference is not clear; however, it could be postulated that the subjects had to respond in more concrete terms to these items due to the verifiable nature of the questions, e.g. "Give me an example from your past clinical practice of how you have operationalised the belief you have told us about."
The corrected item-to-total correlation results presented in Table 2 support the internal reliability of the SSIS. On the basis of these findings it was decided that each item apart from items 2a and 2b should be retained, as none of the other items fell below the rejection threshold of 0.50. Similarly, the result of the calculation of coefficient alpha at 0.76 supported the internal reliability of the instrument.
The stability of the SSIS item and total scores over a period of one year further adds to the confidence in the measurement reliability of the SSIS. Once again, it is believed that further work is required on items 2a and 2b due to the lower correlational values reported in Table 3. It is not clear whether this finding is due to the performance of the SSIS or to measurement error during the second measurement. It may have been more accurate to measure subjects at six-monthly intervals rather than after one year. Using a more frequent, repeated-measures approach may have indicated emerging trends and would have allowed a profile analysis to be undertaken.
The findings on the interrater reliability of the interview teams for each SSIS item presented in Table 4 demonstrate a general consistency within teams, yet the results suggest that it may be better to reduce the number of interview teams and increase the number of members of the remaining teams. This approach may help to decrease the variance seen in Table 4. Another factor that could increase interrater reliability in future would be devoting a greater amount of time to training the interviewers in the use of the SSIS. In retrospect, it is believed that this issue may be the largest source of variance in the interrater reliability performance of the SSIS.
The factorial structure of the SSIS presented in Table 5 supports the content validity of the six factors that were specified a priori according to Wolf et al. (1994). The reported factor loadings for each factor pair appear quite strong, which suggests that it may be possible to add further items to the SSIS in each of the six sub-dimensions of this theme. By increasing the number of items it may be possible to further increase the structural level of the SSIS and to make the interview more flexible for a broader range of applicants without reducing its validity. It will be necessary to conduct further factorial investigations of the new items if this course is chosen in refining the SSIS in the future.
The responses of the users of the SSIS interview reported in Table 6 suggest that interviewers found the SSIS useful in making objective comparisons between candidates, as well as providing a means of comparing the candidates' responses with their past behaviour. An unexpected comment from some interviewers was that they perceived some of the successful candidates as being less team oriented than candidates in previous years. This opinion was also noted in discussion with some of the nurse unit managers of the critical care areas of the hospital. The reasons for these responses are not clear, yet it should be noted that the SSIS does not contain any items relating specifically to the candidates' teamwork orientation. A possible explanation may be that the structure of the SSIS is such that it produces a selection bias toward candidates who demonstrate high motivation and evidence of personal professional development. It will be important to investigate this issue further to determine whether the opinions of the interviewers and senior nurses relate only to the cohort selected in this study or whether the SSIS, as presently structured, does actually bias the selection process towards the 'individual' rather than the 'team member'.
The responses of the candidates to the SSIS were not unexpected due to the significantly different SSIS question content and format in comparison to those used in the past and by some of the other hospitals which collaborate with the university in offering this course. Nurses who are applying for the course often sound out past students in relation to the type of questions that they can expect at interview, therefore, it is not surprising that a number of candidates felt unprepared for the SSIS questions. Some candidates reported that they appreciated the opportunity to have their past achievements taken into consideration in the interview process.
The major limitations of this study are believed to relate to the small sample size and the lack of a comparative valid and reliable selection instrument. As a consequence the findings should be regarded cautiously at this stage. It is recommended that further research be undertaken comparing the SSIS with other valid and reliable staff selection interview schedules and with other groups of nurses. Research should also be undertaken into the effects of expanding the number of items contained in the SSIS, as well as exploring the issue of the team orientation of the successful SSIS candidates. Investigation should be conducted into the relationships between SSIS scores and the academic and clinical achievement levels of candidates.
In summary, it is believed that the SSIS, as presently structured, comprises a valid and reliable base from which to conduct further development and research. The use of past behaviour questions increases the reliability of the SSIS and is consistent with behavioural consistency theory and the axiom that past behaviour is a good predictor of future behaviour. The fundamental question for selection panels that results from this approach is: what behaviour is important to predict? The SSIS deals with this question by framing each of the six question pairs on the dimensions of the "professional growth" grid of the Transformational Model for the Practice of Professional Nursing (Wolf et al., 1994). By employing this approach it is believed that it is possible to accurately select high quality candidates even though they may have limited specific experience of a position, such as the postgraduate nursing student applying to undertake a critical care nursing course.
Acknowledgement: The authors wish to thank Karen Daws, Mandy Voss, Marie Gerdtz and Bernadette Schey from St Vincent's Hospital for their help in completing this research.
References
Anastasi, A. 1988 Psychological Testing. 6th Ed., Macmillan, New York.
Bartram, D. 1991 Addressing the abuse of psychological tests. Personnel Management, 23 (4), 34-39.
Campion, M.A., Campion, J.E., Hudson, J.P. 1994 Structured interviewing: A note on incremental validity and alternative question types. Journal of Applied Psychology, 79 (6), 998-1002
Campion, M.A., Palmer, D.K., & Campion, J.E. 1997 A review of structure in the selection interview. Personnel Psychology, 50 (3), 655-702.
Conway, J.M., Jako, R.A., Goodman, D.E. 1995 A meta-analysis of interrater and internal consistency reliability of selection interviews. Journal of Applied Psychology, 80 (5), 565-579.
Delery, J.E., Wright, P.M., McArthur, K., Anderson, D.C. 1994 Cognitive ability tests and the situational interview: A test of incremental validity. International Journal of Selection and Assessment, 2(1), 53-58.
Dipboye, R.L. 1994 Structured and unstructured selection interviews: Beyond the job-fit model. In Ferris, G.R. (Ed.), Research in personnel and human resource management: Vol. 12, 79-123. Greenwich, CT: JAI Press.
Gatewood, R.D., & Field, H.S. 1990 Human resource selection. Findlay, OH: Dryden Press.
Green, P.C., Alter, P., Carr, A.E. 1993 Development of standard anchors for scoring generic past-behaviour questions in structured interviews. International Journal of Selection and Assessment, 1 (2), 203-212.
Hicks, RE. 1991 Psychological testing in Australia. Asia Pacific Human Resource Management, 29(1), 94-101.
Huffcutt, A.L., Arthur, W. 1994 Hunter & Hunter (1984) revisited: Interview validity for entry-level jobs. Journal of Applied Psychology, 79(2), 184-190.
Latham, G.P., Finnegan, B.J. 1993 Perceived practicality of unstructured, patterned, and situational interviews. In Schuler, H., Farr, J.L., Smith, M. (Eds.), Personnel selection and assessment: Individual and organizational perspectives. 41-55, Hillsdale, NJ: Lawrence Erlbaum.
McDaniel, M.A., Whetzel, D.L., Schmidt, F.L., Maurer, S.D. 1994 The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79 (2), 599-616.
Marchese, M.C., Muchinsky, P.M. 1993 The validity of the employment interview: A meta-analysis. International Journal of Selection and Assessment, 1 (3), 1-26.
Motowidlo, S.J., Carter, G.W., Dunnette, M.D., Tippins, N., Werner, S., Burnett, J.R., Vaughan, M.J. 1992 Studies of the structured behavioural interview. Journal of Applied Psychology, 77 (5), 571-587.
Mumford, M.D., Stokes, G.S. 1992 Developmental determinants of individual action: Theory and practice in applying background measures. In Dunnette, M.D., Hough, L.M. (Eds.), Handbook of Industrial and Organizational Psychology. Vol. 3 (2nd. Ed.) 61-138. Palo Alto CA. Consulting Psychologists Press.
Nankervis, A.R. et al. 1996 Strategic Human Resource Management 2nd Ed. Nelson, Melbourne
Pulakos, E.D., Schmitt, N. 1995 Experience-based and situational interview questions: Studies of validity. Personnel Psychology, 48, 289-308.
Schneider, B. & Schmitt, N. 1986 Staffing organizations. Glenview, IL: Scott, Foresman & Co.
Tullar, W.L., Mullins, T.W., Caldwell, S.A. 1979 Effects of interview length and applicant quality on interview decision time. Journal of Applied Psychology, 64 (6), 669-674.
Vance, R.J., Kuhnert, K.W., Farr, J.L. 1978 Interview judgements: Using external criteria to compare behavioural and graphic scale ratings. Organizational Behaviour and Human Performance, 22, 279-294.
Wolf, G.A., Boland, S. & Aukerman M. 1994. A transformational model for the practice of professional nursing. Part 1, The model. Journal of Nursing Administration, 24 (4), 51-57.
Wolf, G.A., Boland, S. & Aukerman M. 1994. A transformational model for the practice of professional nursing. Part 2, Implementation of the model. Journal of Nursing Administration, 24 (5), 38-46.