UM USER SATISFACTION SURVEY 2009
Final Report
October 13, 2009
Prepared by Angus Cheong

A Collaborative Work by:
Project Consultant: Dr. Angus Cheong
Project Assistants: Athena Seng, Candy Fong, Vicky Chan
Project Leader: Dr. Paul W. T. Poon (University Librarian)
Facilitator: Winnie Leung (University Library) & User Satisfaction Survey Working Team 2009
University of Macau

Table of Contents
Executive Summary
Introduction
Methodology
  I. Data Collection
  II. Sampling
  III. Questionnaire
  IV. Scaling
  V. Construction of Customer Satisfaction Index

Executive Summary

The overall Customer Satisfaction Indexes (CSI) constructed from the four surveys are 70.6%, 71.9%, 69.8% and 70.1% in 2004, 2005, 2007 and 2009 respectively, showing a slightly fluctuating pattern. Taking into account the CSI, the overall satisfaction scores and the specific figures of individual units across the four surveys, the satisfaction level tends to be stable for staff and higher for students in 2009.

In the staff sample, AHR is the most important factor contributing to the CSI, while IPR and AAO are the two least important factors in this regard. In the student sample, ICTO, SAS and REG are the three most important areas contributing to the CSI, while the Library is the least important factor.

About 83% of staff claim that services meet or exceed their expectations in 2009, 2 percentage points higher than in 2007. About 75% of students make the same claim, 12 percentage points higher than in 2007. This shows a positive evaluation, with a particularly large increase in the student sample.

Sixty-seven percent of the staff claim that they sometimes or always recommend the services to others, while 33% of the students sometimes or always do so in 2009. This is a slight increase (2 percentage points) for the staff sample from the previous survey, whereas a considerable jump of 7 percentage points was found for the student sample.

Seventy-five percent of the staff claim that the overall performance is improving, 1 percentage point lower than in 2007, while 46% of the students hold the same opinion, 1 percentage point higher than in 2007.

Twenty-five percent of the staff and 31% of the students reported encountering a service problem in the past year. For staff, the problems mainly occurred in classroom facilities, venue booking, procurement, the air-conditioning system, computer networking and car parking, whereas computer rooms/computers, the library and enrollment are the main areas in which students encountered problems.
Cleaning, procurement, maintenance, computer support, and payment procedures/campus health care are the five services that staff most frequently suggest be improved, while the computer room service, library service, canteen service, sports complex venue rental service and E-purse value-adding service are the services most frequently mentioned by students as needing improvement.

Introduction

The University of Macau has conducted user satisfaction surveys every two years since 2005 in order to collect opinions from the entire University community about the facilities and services provided by the various administrative units. Identifying the problems, weaknesses, strengths and importance of these services helps the University management set a direction for future development and provide better services to the University community.

The 2009 survey adopted the same approach as that used in 2004, 2005 and 2007. The current report includes the construction of a customer satisfaction index (CSI) for each survey in order to compare general performance over time. The following research questions were asked and answered so as to provide a useful reference for decision-making by the University management:

- How satisfied are the respondents with the overall performance of the administrative units?
- How do the respondents rate the performance of each administrative unit?
- What are the respondents' concerns?
- What are the users' suggestions about, or opinions of, the services?
- How does user satisfaction change over time?

This report is divided into six parts: Executive Summary, Introduction, Methodology, Survey Results, Conclusion and Recommendations, and Appendices. A more detailed literature review on user satisfaction surveys can be found in the 2004 report.

Methodology

I. Data Collection

The 2009 survey adopted three data collection methods. For the staff sample, we mainly used an online survey, supplemented by a paper-and-pencil questionnaire. For the student sample, we interviewed students by telephone.

II. Sampling

To obtain a representative sample, we conducted a census-like sampling of the staff, in which every staff member received a standardized questionnaire online, by distribution or by email, and we used random sampling to draw a sample of all registered students for telephone interviews. The telephone survey was conducted between April 27 and April 30, 2009, while the staff survey was conducted between April 27 and June 21, 2009. Twenty-two UM students were trained to conduct interviews, exercise supervision, and perform data-input tasks. The sampling results are as follows.

1. Staff Sample

A total of 904 staff were invited to complete the online survey at the first stage and the email and paper-and-pencil surveys at the second stage. A total of 459 completed questionnaires were returned, 408 from the online survey and 51 from the paper-and-pencil surveys, yielding an overall return rate of 50.8%, lower than that of the 2007 survey (60.4%). The return rate from the administrative units is 63.6%, whereas that from the academic and research units is 35.7%. Among all 21 units, the highest return rate is 100% and the lowest is 28.2%. The sampling error is 3.21% at the 95% confidence level.

2. Student Sample

A total of 800 students were randomly selected from the 6,289 active students of the University. Using the Computer-Assisted Telephone Interviewing (CATI) system, we contacted 665 students; the remaining 135 could not be reached owing to busy lines, absence from home and other technical reasons. In the end, 603 students were successfully interviewed, yielding a very high response rate of 90.7%. The sampling error is 3.8% at the 95% confidence level.
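The reported sampling errors are consistent with the conventional margin-of-error formula for a proportion at maximum variability (p = 0.5) with a finite population correction. The sketch below, which reproduces both figures under that assumption, is our reconstruction for illustration; the formula and function name are not taken from the report itself.

```python
import math

def sampling_error(n, N, z=1.96, p=0.5):
    """Margin of error at the 95% confidence level (z = 1.96),
    assuming maximum variability (p = 0.5) and applying the finite
    population correction for sampling from a population of size N.
    (Assumed formula; the report does not state its method.)"""
    se = math.sqrt(p * (1 - p) / n)        # standard error of a proportion
    fpc = math.sqrt((N - n) / (N - 1))     # finite population correction
    return z * se * fpc

# Staff sample: 459 returned questionnaires out of 904 staff
print(f"Staff:   {sampling_error(459, 904):.2%}")   # -> 3.21%
# Student sample: 603 interviews out of 6,289 active students
print(f"Student: {sampling_error(603, 6289):.2%}")  # -> 3.80%
```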
III. Questionnaire

The same questionnaires were adopted as in the 2007 survey, except for a few wording changes and the addition or deletion of some service items by some units (refer to the appendix for details).

IV. Scaling

The ten-point scale

For the satisfaction and performance rating questions, we adopted the ten-point scale for several reasons:

1. The ten-point scale is preferred because it can reflect incremental changes over time when used repeatedly, and it can reflect the extent of progress in reaching service targets (Hernon & Whitman, 2001).
2. The ten-point scale is easily understood and avoids a numeric midpoint, whereas a 5-point or 7-point scale offers a midpoint that allows respondents to avoid answering the question.
3. The ten-point scale can help measure whether the user is more or less satisfied, in however small a degree. The labels at each end denote the extreme limits of dissatisfaction and satisfaction, respectively.

The following illustration shows the interpretation of such a scale, using the overall satisfaction question from the survey.

Question: What is your overall level of satisfaction with all services provided by various administrative units of UM?

[1] Lowest   [2 3 4]   [5] [6]   [7 8 9]   [10] Highest

Scores of 1 and 10 are extremes; few people are likely to choose either. Scores of 5 and 6 indicate only slight dissatisfaction or satisfaction; however, selecting 5 or 6 forces an inclination in one direction or the other. The [2 3 4] and [7 8 9] ranges indicate dissatisfaction and satisfaction, respectively, and most people respond in these ranges. The [7 8 9] grouping offers the respondent a way to fine-tune a non-extreme score: a score of 7 indicates moderate satisfaction and signals room for improvement without expressing actual dissatisfaction. The same reasoning applies to the [2 3 4] grouping. An average score of at least 8 is very good, whereas people who score a 7 are indicating that they are not exactly dissatisfied but are near the lowest range of satisfaction. Scores below 7 should be a cause for concern, and of greatest and most immediate concern are scores in the 1 to 4 range, which clearly signal dissatisfaction. Imagine that the lower the score, the louder the voice of dissatisfaction.

Another significant question type is the users' expectation score:

Please indicate whether our services fall short of, exactly meet, or exceed your expectations.

-3  Completely fall short of expectations
-2  Somewhat fall short of expectations
-1  Slightly fall short of expectations
 0  Exactly meet expectations
+1  Slightly exceed expectations
+2  Somewhat exceed expectations
+3  Completely exceed expectations

A score of 0 means that expectations were exactly met: nothing more, nothing less. Scores above 0 indicate that the service exceeds the users' expectations, while scores below 0 indicate that the users' expectations are not being met. The latter implies that a problem or misunderstanding should be identified and corrected.
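To make the interpretation of the ten-point scale concrete, the sketch below recodes raw ratings into the bands just described. The function, its name and the sample ratings are hypothetical, introduced purely for illustration; only the cut-offs follow the text above.

```python
def interpret_rating(score: int) -> str:
    """Map a 1-10 satisfaction rating to the interpretation bands
    described in the text (hypothetical helper for illustration)."""
    if not 1 <= score <= 10:
        raise ValueError("rating must be between 1 and 10")
    if score == 1:
        return "extreme dissatisfaction"
    if score <= 4:
        return "dissatisfaction (greatest and most immediate concern)"
    if score == 5:
        return "slight dissatisfaction"
    if score == 6:
        return "slight satisfaction"
    if score <= 9:
        return "satisfaction (a 7 still signals room for improvement)"
    return "extreme satisfaction"

for rating in (1, 3, 6, 7, 10):
    print(rating, "->", interpret_rating(rating))
```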
A recommendation question was also used to tap whether users would recommend the services to others, using a scale of 1 = Never, 2 = Seldom, 3 = Sometimes and 4 = Always:

How often do you praise/recommend UM's administrative services to others?

V. Construction of Customer Satisfaction Index

In customer satisfaction research, two approaches are commonly used for calculating a customer satisfaction index (CSI): the stated-importance approach and the derived-importance approach. The stated-importance approach uses both stated importance and performance scores in constructing the CSI, while the derived-importance approach uses regression analysis to derive betas for calculating the CSI (Chu, 2002; Hill et al., 2003). Both approaches have their strengths and weaknesses. Considering the advantage of a shortened questionnaire, the stability of the statistical measure of the impact of attributes on overall customer satisfaction, and the superior predictive and explanatory power of the derived-importance approach over the stated-importance approach (Chu, 2002), we adopted the derived-importance approach in this project.

As illustrated in Table 1 below, a regression analysis is first run with overall satisfaction as the dependent variable and the attributes, the specific administrative units in our case, as predictors, producing the relative impact of each attribute. The beta score of each attribute (column 1) is listed in column 2. Second, a beta weight for each attribute is calculated by dividing its beta score by the sum of all beta scores (column 3). Third, a mean score is computed for each attribute from the respondents' ratings of that attribute's performance (column 4). Fourth, a satisfaction weight is calculated by multiplying the beta weight by the mean score (column 5). Summing the figures in column 5 produces the overall customer satisfaction index (column 6).

Table 1. An illustration of the derived-importance approach to CSI (modeling results)

Attribute (1)   | Satisfaction score (beta) (2) | Beta weight (3) | Mean satisfaction score (4) | Satisfaction weight (5) | CSI (6)
AAO             | 0.27                          | 0.3375          | 6.9                         | 2.32875                 |
AHR             | 0.18                          | 0.225           | 7.1                         | 1.5975                  |
FO              | 0.16                          | 0.2             | 6.9                         | 1.38                    |
CMO             | 0.13                          | 0.1625          | 7.3                         | 1.18625                 |
PUB             | 0                             | 0               | 7                           | 0                       |
Library         | 0.19                          | 0.2375          | 7.3                         | 1.73375                 |
ICTO            | 0                             | 0               | 7                           | 0                       |
IPR             | 0.13                          | 0.1625          | 6.9                         | 1.12125                 |
Faculty Office  | 0                             | 0               | 7.3                         | 0                       |
Total           | 0.8                           |                 |                             |                         | 8.226 (82.26%)

The CSI score ranges from 0 to 100, obtained by rescaling the sum of the satisfaction weights, which ranges from 0 to 10. Because customer responses range from 1 to 10, a CSI of 80 roughly translates into an average customer response of 8. This approach is more stable than simply looking at the responses to a single overall satisfaction question, as an index is less affected when a customer misunderstands one question.

The satisfaction weights in column 5 show each attribute's relative contribution to the total satisfaction index. For example, AAO receives a satisfaction weight of 2.32875, indicating that it is the most important area affecting changes in the satisfaction index. An attribute carrying a high beta weight together with a low mean satisfaction score is one that needs to be addressed and studied carefully.
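The four steps can be written out as a short computation. The sketch below uses hypothetical unit names, betas and mean scores rather than the survey's figures; only the procedure (beta weights, satisfaction weights, and rescaling to 0-100) follows the description above.

```python
def derived_importance_csi(betas, means):
    """Derived-importance CSI: divide each attribute's regression beta
    by the sum of betas (beta weight), multiply by its mean satisfaction
    score (satisfaction weight), sum, and rescale to 0-100."""
    total_beta = sum(betas.values())
    beta_weights = {a: b / total_beta for a, b in betas.items()}  # column 3
    sat_weights = {a: beta_weights[a] * means[a] for a in betas}  # column 5
    return 10 * sum(sat_weights.values())                         # column 6

# Hypothetical betas and mean scores, for illustration only
betas = {"Unit A": 0.30, "Unit B": 0.20, "Unit C": 0.50}
means = {"Unit A": 8.0, "Unit B": 7.0, "Unit C": 6.0}
print(f"CSI = {derived_importance_csi(betas, means):.1f}")  # CSI = 68.0
```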
