Designing Assessment and Measuring Longitudinal Student Performance in the Management Program

Abstract

The School of Business at Western New Mexico University (WNMU-SB) has a diverse population of approximately 250 undergraduate students in Silver City, New Mexico, USA. All four undergraduate degrees at WNMU-SB are accredited by the Accreditation Council for Business Schools and Programs (ACBSP). With the implementation of a systematic collection of data through the Canvas Learning Management System (CLMS), the WNMU-SB has been able to collect assessment data on Program Learning Outcomes (PLOs) for all majors (Accounting, General Business, Management, and Marketing). By using an integrative approach to knowledge building, WNMU-SB was able to develop program learning outcomes (PLOs) for its four programs, develop learning activities in alignment with the PLOs, and assess student performance over time. The purpose of this study is to statistically assess the PLOs developed for the Management program. We present the results from a sample of 82 full-time students and measure longitudinal student performance in Management core courses and the capstone course over a period of five years. Additionally, the data is used to understand the relationship between curriculum mapping, student performance, and assessment instruments.

Introduction

The School of Business at Western New Mexico University (WNMU-SB) has a diverse population of approximately 250 undergraduate students in Silver City, New Mexico, USA. With the implementation of a systematic collection of data through the Canvas Learning Management System (CLMS), the WNMU-SB has been able to collect assessment data on Program Learning Outcomes (PLOs) for all majors (Accounting, General Business, Management, and Marketing). The data is used to understand the relationship between curriculum mapping, student performance, and assessment instruments. Five key areas of proficiency were identified by the full-time faculty as essential for graduates of the School of Business: understand foundational concepts, use data to make decisions, synthesize information, apply ethics, and communicate effectively. Following the identification of these proficiencies, the faculty articulated learning goals for each program. Specifically, for students majoring in the Management program, this common set of PLOs provides a broader picture of the skills students need in their professional careers.

WNMU-SB’s systematic processes are rooted in the Accreditation Council for Business Schools and Programs (ACBSP) quality standards and criteria. “Learning is an activity directed towards improving personal knowledge. Knowledge building is trying to improve knowledge itself by considering ideas in regard to their strengths, weaknesses, applications, limitations, and potential for further development” (Lamon, Reeve, & Scardamalia, 2001). In building knowledge, it is imperative to articulate learning goals, design analyses to track and capture activities, and create curriculum maps that evolve with the results. WNMU-SB’s systematic processes include an overarching strategic plan, overall program learning outcomes, program-specific learning outcomes, program-specific curriculum mapping, and semiannual assessment reports. By using an integrative approach to knowledge building, the WNMU-SB is able to develop outcomes, develop learning activities, and assess student performance over time.

The purpose of this study is to describe and test the PLOs developed by WNMU-SB full-time faculty for the Management program. We present the results from the Management program and statistically measure longitudinal student performance. The study uses Jarchow, Formisano, Shane, & Sayre (2017) as its base model to evaluate the Management PLOs over time. To achieve this, we measured average student performance on five Management PLOs in a group of core courses, as well as in the capstone course, over five years.

In the second section of this paper, we summarize relevant literature on business schools’ specific issues (such as accreditation and stakeholders’ input), performance assessment design, pre-tests and post-tests, and competencies. The third section includes a description of processes at WNMU-SB, such as the curriculum mapping process and the creation of program learning outcomes. The fourth section presents the methodology, empirical model, and analysis based on the student performance data obtained. Finally, the study provides conclusions and recommendations.

Literature Review

Accredited business programs, particularly those accredited by the ACBSP, are diverse and do not share a single “one size fits all” performance assessment methodology. In order to achieve ACBSP accreditation, business schools follow rigorous standards that allow certain degrees of freedom for the schools to determine their own assessments. ACBSP is a voluntary accreditation, as defined in Oedekoven (2019). In exploring the differences between voluntary and required accreditations, we find multiple ways to assess performance. For example, Terry Orr, Hollingworth, and Beaudin (2020) assessed state-specific results for licensures. They evaluated candidates’ preparedness for leadership licensure and initial school leadership positions. To increase quality and candidate performance, they found that in the field of education and licensure it is important to have clear instructions, policies, and assessment standards, and the willingness to drop unprepared candidates. The majority of business schools have voluntary accreditations.

Assessment is not a new concept in the literature; however, there have been only a few attempts to assess specific skills in certain undergraduate degrees. There are common concepts, rubrics, and outcome assessment methodologies, but there is no definitive model that fits all programs. A common concept in accredited business schools is “closing the loop” with assessment. The concept requires time and consistency from the programs implementing it. “The increasing popularity of the assessment of learning outcomes process is viewed as highly positive because it can be considered as best-practices in higher education” (Reich, Collins, DeFranco, & Pieper, 2019). Reich et al. (2019) considered the timeframes necessary for the entire process to provide results. From the design of the tools to data collection, analysis, and discussion of findings, the authors provide a roadmap of best practices but not a statistical model.

Another important concept in accredited programs is the real-life experience opportunities (internships) provided to students and their assessment. These experiences shape and develop competences necessary for the workplace and in demand by employers. “Internships are presented as a unique opportunity to improve students’ competences and career prospects” (Ferreras-Garcia, Sales-Zaguirre, & Enric, 2020). In a sense, business schools have to balance the need to include external stakeholders’ input (i.e. employers) into their programs, while at the same time meeting internal stakeholders’ requirements (i.e. academics). It is important that students perform their best while in our programs; however, “the assumption was made that in today’s ‘user pays’ environment, the business reality for education is that individual product demand is largely price driven and the overall market is one governed by commercial instincts, rather than by a variety of other altruistic imperatives, e.g. cognitive pleasure” (Cotton, 2001). This adds further complexity to the business school’s value proposition and business model, making it a balancing act between how much and how far the school needs to go to be successful. Performance-based assessments offer guidelines to design quality programs that are priced right and that provide the expected return on investment.

Other fields, such as journalism, have also invested in so-called “authentic assessment” (Perruso Brown & Kingsley-Wilson, 2010) and designed tools to address pre- and post-knowledge of the subject matter. At the micro level (course level), Perruso Brown and Kingsley-Wilson developed assignments that met the requirements of information literacy skills. D’Angelo (2001) discussed the importance of micro-level assessments on the grounds of adult career development. However, the pre-test and post-test were not designed for quantitative analysis, so no correlation could be established between the changes made and the results obtained. “Although a direct correlation cannot be made between the modifications to the assignment and revised class session, these changes appeared to have been successful in allowing students to better grasp and apply the concepts of information literacy and the information search process” (D’Angelo, 2001).

In addition, specific competences and skills have been individually assessed by Ding and Ma (2013), Calma (2013), and Geithner and Pollastro (2015). All three studies identified deficiencies in the individual skills or competences but offered recommendations to improve the assessment instruments, such as including rubrics in the assessment and embedding the skills in discipline-specific teaching. “While designing learning outcomes (course, skills or programs), it is important to remember that active learning methods and not lecture based instructions are growing in importance because they are results oriented” (Hallinger & Lu, 2013).

Calma (2017) describes the challenges universities face in developing skills through the curriculum. In addition, he indicates that it is necessary to plan for the future by incorporating feedback from various stakeholders into university programs. He points out the importance of meeting certain outcomes and addressing them by examining the skills students need in their professional careers. Choudhury (2012) also pointed out issues of quality in curriculum design, delivery, course assessment, and learning. Choudhury advocated for rubrics as a model of standardized grading to identify learning dimensions.

Finally, Jarchow et al. (2017) chose initial courses in a sustainability major and the capstone as a baseline and a measure of progress, respectively. Their study found improvement in all measures of student learning outcomes. The authors mention the difficulty of assessing “how much” improvement in learning is sufficient since “there are no universal standards for learning in higher education” (Jarchow, Formisano, Shane, & Sayre, 2017). Their results for a young program were overall positive, although the sample was limited to 18 students. However, the integrative model was one built on consensus within the sustainability program.

The studies mentioned above underscore the importance of expanding knowledge of quantitative assessment. Based on the literature review, the WNMU-SB addresses not only the creation of program learning outcomes from a holistic perspective, but also includes stakeholders’ feedback through its strategic planning process and full-time faculty collaboration. This study contributes to the existing literature by developing a quantitative longitudinal assessment of student learning and performance in the Management program over a period of five years on five program learning outcomes (PLOs).

Developing Program Learning Outcomes at WNMU-SB

All four undergraduate bachelor’s degrees at WNMU-SB are accredited by the Accreditation Council for Business Schools and Programs (ACBSP). ACBSP relies on the quality of teaching to assess curricula performance and award accreditation. ACBSP is a voluntary accreditation, as described by Oedekoven (2019) and mentioned in the literature review section. Voluntary accreditation allows business schools the option to seek the accreditation, as opposed to compliance-based accreditation, which entails permits, licenses, and specific rules and regulations. The goal for all business schools is to create quality and value for their higher education institutions and to focus on the educational experience for their target markets. Usually this experience results in customized programs.

“At its core, voluntary accreditation provides you with well-defined processes that you use to close the gap between aspiration and reality relative to your educational experience. Voluntary education is a way to elevate your quality, align goals with results, and ensure that you are meeting your delivery expectations” (Oedekoven, 2019). Understanding how to assess learning and knowledge building is the basis for a structured program that meets learners’ and educators’ expectations. The tools used at WNMU-SB include pre-tests and post-tests, exit exams, and learning outcomes. In order to be useful, learning and knowledge building need to be assessed for students during their tenure in the programs. WNMU’s Management degree has been pioneering the assessment process since 2014.

Assessment of student learning takes place throughout the entire program and in all business courses. The School of Business faculty provide internal and formative assessment information through the Canvas Learning Management System (CLMS). The faculty is responsible for designing assessment instruments, keeping records of assessment, and making changes to their courses. Program-level assessment data is gathered at summative points in the curriculum (core and capstone courses). These summative points have been identified through the exercise of curriculum mapping. This process is internal, and it is discussed on a constant basis in the WNMU-SB, with the goal of keeping it relevant and updated. “A common issue with assessment in higher education is that the approach taken at the university and program level is often fragmented” (Gilbert & Oedekoven, 2015). The disconnection between the parts, as explained by Gilbert and Oedekoven (2015), includes inappropriate use of technology, lack of training, and isolation of the stakeholders. When data is not shared across the organization, roles are erased, responsibilities are diminished, and improvement is secluded. We believe that statistically analyzing and sharing the results from WNMU-SB’s constant and systematic collection of repeatable data will fundamentally increase the interest and “buy in” of the faculty.

The integrative approach at the WNMU-SB includes input from stakeholders, constant communication of results, and collection of feedback. “Good curriculum design includes articulating measurable student learning outcomes (SLOs) and designing the pedagogical methods needed to help students learn those SLOs” (Fink, 2013). In the summer of 2014, full-time faculty drafted overarching Program Learning Outcomes (PLOs) for all WNMU-SB programs. At that time, the Management program embraced the PLOs and began implementing them in Spring 2015. In the fall of 2016, as part of the development of a five-year strategic plan, the overarching PLOs were modified to address specific key competences identified by stakeholders of the School of Business. WNMU-SB gathers stakeholders’ input through periodic surveys distributed electronically. Each program subsequently developed its variants from the overarching PLOs. All PLOs were approved unanimously in October 2016 by the full-time faculty in the School of Business.

Each business major in the School of Business has parallel requirements, including general education, supporting coursework, core requirements, and concentration courses, adding up to a total of 120 credits. The Management degree comprises 31 general education required credits, 6 BS/BBA required credits, 15-16 credits in supporting coursework, 24 business administration core credits, 24 management concentration credits, 6 credits in upper division electives, and 14 additional credit hours. The Management major shares the 8 business administration core courses, including the capstone course, with Accounting, General Business, and Marketing. Additionally, all degrees share 2 business-specific supporting courses that could be cataloged as core courses (Microeconomics and Macroeconomics). As described in WNMU’s 2020-2021 Catalog, the Management degree has 8 concentration courses (see Appendix 1). All the coursework listed in Appendix 1 assesses several PLOs for all degrees, including the Management PLOs.

The Management program PLOs are listed below. These PLOs are structurally included in course-specific assignments and rubrics and identified through curriculum mapping exercises with the full-time faculty. All five PLOs are direct, formative, internal instruments that use comparative data collected through CLMS.

PLO1. Implement foundational concepts of management and explain management roles, i.e. setting goals, objectives, and strategies to accomplish a purpose. 

PLO2. Analyze, interpret, and synthesize data to make managerial decisions.

PLO3. Synthesize information from applicable disciplines into management concepts.

PLO4. Apply management ethics and demonstrate understanding of corporate social responsibility, in the context of a diverse, global/multi-cultural business environment.

PLO5. Exhibit effective oral and written communication skills related to management activities.

The curriculum mapping process (for all programs) requires each faculty member to identify which PLOs could be appropriately assessed in their classes (see Appendix 2). The faculty then identify learning activities (assignments, test content, course projects, etc.) to serve as proxy measures of competency for each PLO. Like a roadmap to graduation, the faculty develop and follow the curriculum map to design their courses, create rubrics, and incorporate PLOs within rubrics. In following the process, the CLMS automatically collects and accumulates the students’ performance data. It is important to mention that the overarching WNMU-SB PLOs are unique to the department. Equally unique are the PLOs for each program (Accounting, Management, Marketing, and General Business). That said, every PLO uses the same five-point Likert scale and is based on the same overarching concepts. Using the five-point Likert scale, the faculty member assigns a rating to each student’s performance on that activity/outcome:

  1. “Performs significantly below expectations”
  2. “Performs below expectations”
  3. “Meets expectations”
  4. “Meets expectations above average”
  5. “Exceeds expectations well above average”

Mean ratings are interpreted as a metric of successful instruction for each semester cohort on that PLO competency. This rating process is performed independently of the assignment grade. At the end of each semester, students’ scores are collected, processed and distributed for review by all faculty and summarized for accreditation documentation and stakeholders’ feedback. Reports are created and information is shared through the university and WNMU-SB’s webpage. 

The assessment of specific PLOs allows the Management faculty to assess strengths and deficiencies in course content using inferences about the distribution of students who meet, fail to meet, or exceed expectations for each PLO. This information provides the instructor(s) with feedback on ways to improve assignments and PLO-related course content over time. The program assessment cycle, and the rigor with which students are assessed on the Program Learning Outcomes (PLOs), follow the set course rotations.
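
As an illustration only, the short sketch below shows how such an end-of-semester summary could be produced from a flat export of CLMS ratings; the file name and the columns (semester, plo, rating) are hypothetical rather than actual CLMS field names.

    # A minimal sketch of an end-of-semester PLO summary, assuming one row per
    # Likert rating. Column names (semester, plo, rating) are hypothetical.
    import pandas as pd

    ratings = pd.read_csv("plo_ratings.csv")

    summary = ratings.groupby(["semester", "plo"])["rating"].agg(
        mean_rating="mean",                          # cohort mean on the 1-5 scale
        n="count",                                   # number of ratings collected
        meets_or_exceeds=lambda r: (r >= 3).mean(),  # share at "Meets expectations" or above
    )
    print(summary.round(2))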

The curriculum map takes into consideration classes that are taught at an Introductory level (100 and 200 level), others at a Basic level (200 and 300 level), and the upper division (or specialization) classes at an Advanced level (400 and 500 level) (see Appendix 2). Specifically, in the Management program, PLO assessment occurs at the sophomore (200) level and above. By using this information, we were able to separate the group of core courses specific to the Management program. As mentioned before, assessment reports are completed at the end of each semester (using data gathered in the previous semester), presented to the faculty, and systematically saved to follow ACBSP standards. The outcomes are ultimately assessed through the students’ performance throughout their stay in the program.

Furthermore, the learning outcomes reflect the content of the Management courses. The Management program provides students with opportunities to expand managerial and business-related skills and knowledge, by using the mentioned holistic approach and by “closing the loop” with assessment. The classes allow students to experience the operations of a business and encourage and provide opportunities to interact with local businesses and organizations.

Statistical Model and Results

To measure longitudinal student performance, we identified the students who had taken the baseline (core) courses as identified by the degree plan and curriculum map (Appendix 2). Students take the core courses during their sophomore and junior years, and some during their senior year. Each core course measures Management-specific PLOs (see Appendix 1). Since there are multiple assessments in the courses, and over time, we used the average score for each individual PLO as our dependent criterion. Although the specific assessments could have been modified in some classes, the PLOs did not change over time. We connected the data to each student for each one of the courses, over the span of five years, and calculated their individual PLO average score. For the core courses, a total of 3,906 data points were used to calculate averages of 82 full-time students in five PLOs.

Since all Management PLOs are assessed in the capstone course, BSAD 497, and all students take this class in their senior year before graduating, we were able to match the students with their performance at that time. For the capstone course, a total of 5,013 data points were used to calculate averages of the same 82 full-time students in five PLOs. Using the averages for each of the five PLOs at the core-course level and in the capstone course, per student, we were able to measure longitudinal performance.
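
A minimal sketch of this aggregation step is shown below, assuming the CLMS ratings are exported as flat tables with one row per rating; the file names and the columns (student_id, plo, rating) are hypothetical and not the actual CLMS fields.

    # Sketch of building paired per-student PLO averages (core vs. capstone).
    # File names and columns (student_id, plo, rating) are hypothetical.
    import pandas as pd

    core = pd.read_csv("core_course_ratings.csv")      # ratings from the core courses
    capstone = pd.read_csv("capstone_ratings.csv")     # ratings from the capstone (BSAD 497)

    # Average every rating a student received on a given PLO, across courses and semesters
    core_avg = core.groupby(["student_id", "plo"])["rating"].mean().unstack()
    cap_avg = capstone.groupby(["student_id", "plo"])["rating"].mean().unstack()

    # Keep only students observed in both the core courses and the capstone
    paired = core_avg.join(cap_avg, lsuffix="_core", rsuffix="_capstone", how="inner")

Each row of the resulting table holds one student’s averages for the five PLOs at the core level and at the capstone, which is the form required for the paired comparison described next.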

We paired students to answer the question “Is there a difference in overall performance on each PLO score between the core and the capstone courses in the Management program?” We used the scores at the core and capstone courses as our dependent variable and the students as our independent variable. We chose the paired-sample t-test (Cohen, 1988) to determine whether there was a significant difference in students’ performance between the core and the capstone averages; a computational sketch of one such comparison is provided after the results. We determined that this model was more robust and effective in finding a true medium effect than other models, such as the analysis of variance (ANOVA). Based on the empirical estimations, the results for each of the PLOs are:

1. Is there a difference in overall performance on PLO 1 scores between the core and the
capstone courses?

Results of the paired-sample t-test showed that the mean difference in overall performance on PLO 1 scores between the core and the capstone courses [Mean difference = -.20, SD = 1.35, 95% CI (-.49 to .10)] was not statistically significant at the .05 level of significance (t = -1.31, df = 81, p = .193). The effect size (d = -.15) was small based on Cohen’s (1988) guidelines. The null hypothesis, which suggested that there was no significant difference in overall performance on PLO 1 scores between the core and the capstone, is retained.

2. Is there a difference in overall performance on PLO 2 scores between the core and the
capstone courses?

Results of the paired-sample t-test showed that the mean difference in overall performance on PLO 2 scores between the core and the capstone courses [Mean difference = -.29, SD = .92, 95% CI (-.49 to -.09)] was statistically significant at the .05 level of significance (t = -2.88, df = 81, p = .005). The effect size (d = -.32) was small based on Cohen’s (1988) guidelines. The null hypothesis, which suggested that there was no significant difference in overall performance on PLO 2 scores between the core and the capstone, is rejected.

3. Is there a difference in overall performance on PLO 3 scores between the core and the capstone courses?

Results of the paired-sample t-test showed that the mean difference in overall performance on PLO 3 scores between the core and the capstone courses [Mean difference = -.28, SD = 1.05, 95% CI (-.51 to -.05)] was statistically significant at the .05 level of significance (t = -2.40, df = 81, p = .019). The effect size (d = -.27) was small based on Cohen’s (1988) guidelines. The null hypothesis, which suggested that there was no significant difference in overall performance on PLO 3 scores between the core and the capstone, is rejected.

4. Is there a difference in overall performance on PLO 4 scores between the core and the capstone courses?

Results of the paired-sample t-test showed that the mean difference in overall performance on PLO 4 scores between the core and the capstone courses [Mean difference = -.70, SD = 1.97, 95% CI (-1.13 to -.26)] was statistically significant at the .05 level of significance (t = -3.19, df = 81, p = .002). The effect size (d = -.35) was small based on Cohen’s (1988) guidelines. The null hypothesis, which suggested that there was no significant difference in overall performance on PLO 4 scores between the core and the capstone, is rejected.

5. Is there a difference in overall performance on PLO 5 scores between the core and the capstone courses?

Results of the paired-sample t-test showed that the mean difference in overall performance on PLO 5 scores between the core and the capstone courses [Mean difference = .10, SD = .97, 95% CI (-.11 to .32)] was not statistically significant at the .05 level of significance (t = .97, df = 81, p = .333). The effect size (d = .11) was small based on Cohen’s (1988) guidelines. The null hypothesis, which suggested that there was no significant difference in overall performance on PLO 5 scores between the core and the capstone, is retained.
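
As referenced above, the following is a minimal computational sketch of one core-versus-capstone comparison, assuming two equal-length arrays of per-student PLO averages. SciPy is used for illustration only, since the study does not name the statistical package employed, and the function and array names are hypothetical.

    # Sketch of one PLO comparison between core and capstone per-student averages.
    # SciPy is used for illustration; the study does not specify its software.
    import numpy as np
    from scipy import stats

    def paired_comparison(core_scores, capstone_scores):
        core = np.asarray(core_scores, dtype=float)
        cap = np.asarray(capstone_scores, dtype=float)
        diff = core - cap                             # core minus capstone, per student

        t_stat, p_value = stats.ttest_rel(core, cap)  # paired-sample t-test, df = n - 1
        cohens_d = diff.mean() / diff.std(ddof=1)     # mean difference / SD of the differences
        ci_low, ci_high = stats.t.interval(
            0.95, df=len(diff) - 1, loc=diff.mean(), scale=stats.sem(diff)
        )                                             # 95% CI of the mean difference
        return t_stat, p_value, cohens_d, (ci_low, ci_high)

Under this definition of the effect size, the reported values of d follow directly from the reported mean differences and standard deviations; for PLO 2, for example, d = -.29 / .92 ≈ -.32.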

Conclusions and Recommendations

This study demonstrates a significant difference in average performance (positive student performance) over the course of the Management degree in three areas: Analyze, interpret, and synthesize data to make managerial decisions; Synthesize information from applicable disciplines into management concepts; and Apply management ethics and demonstrate understanding of corporate social responsibility in the context of a diverse, global/multi-cultural business environment (PLOs 2, 3, and 4). The null hypothesis is rejected for PLOs 2, 3, and 4. However, there was no evidence of a significant difference in average performance (positive student performance) for PLO 1 (Implement foundational concepts of management and explain management roles, i.e. setting goals, objectives, and strategies to accomplish a purpose) and PLO 5 (Exhibit effective oral and written communication skills related to management activities). Therefore, the null hypothesis is retained for PLOs 1 and 5.

We believe that the sample size over five years (10 semesters) is appropriate, and that using the average PLO scores increases consistency among the student data and helps us establish the differences in assessments between the core courses and the capstone. Moreover, the commitment of the faculty teaching the management courses and incorporating the PLOs throughout the curriculum gives us confidence in the accuracy of the scores over time.

Specifically, for the PLOs with statistically significant differences between core and capstone course average scores (2, 3, and 4), we can infer that the core courses in general, and those in the Management concentration, successfully integrated and emphasized the following areas: analyzing, interpreting, and synthesizing data; synthesizing information from applicable disciplines; and applying management ethics and demonstrating understanding of corporate social responsibility. Correspondingly, the curriculum map shows a higher concentration of assessments and a higher level of teaching (basic and advanced) in the concentration classes for PLOs 2, 3, and 4. Furthermore, the concentration courses tend to design assessments based on higher levels of Bloom’s taxonomy (case analysis, discussion, and practice).

For the PLOs with no statistically significant difference between core and capstone course average scores (1 and 5), we can infer that the core courses in general, and those in the Management concentration, fell short of integrating and emphasizing the areas of implementing foundational concepts of management and explaining management roles, and exhibiting effective oral and written communication skills related to management activities. After analyzing the statistical results and reviewing the PLO1 and PLO5 columns of the curriculum map, we can see that PLO1 fell short in the number of assessments, the levels of teaching (mostly introductory), and the Bloom’s taxonomy levels. When analyzed at the micro level, content for PLO1, foundational concepts of management, is taught in the Introduction to Business class (BUSA 1110), which is not reflected as part of the core courses.

As for PLO5, it is not uncommon to find students struggling with communication skills (soft skills), and although the levels and taxonomies for PLO5 seem to mirror the others, we can identify only one specific course, BSAD 355 (Communications in Business and Industry), addressing communication skills. BSAD 355 is a requirement only in the Management program and an elective in the other programs, and it shows only one assessment overall for PLO5. Finally, other Management concentration courses with gaps in their assessment, such as MGMT 2110, MGMT 454, and BSAD 441, do not address PLO5. These factors may contribute to the lack of improvement in students’ overall average performance on PLO5.

The statistical model supports the importance of using an overarching approach in designing assessment measures, mapping curricula, and constantly collecting data. Based on the statistical results and analysis of the data, addressing the learning and assessment gaps in BUSA 1110, BSAD 355, MGMT 2110, MGMT 454, and BSAD 441 through curriculum mapping exercises, assessment creation, and constant collection of data could result in positive changes in student performance. These changes could result in overall improvements in the Management program assessment and in “closing the loop”. The WNMU-SB will need to continue assessments and statistically evaluate the model after changes are implemented and once enough data is collected. Furthermore, the strong integration of the holistic assessment practice in the WNMU-SB does not exempt the assessment from logical reasoning errors (traps) common in assessment activities. Future papers should address these errors in order to improve the holistic approach.

As established in the curriculum map, specific program goals are achieved through the delivery of course material in the classes, and documentation of student performance/improvement is systematically collected through the learning management system. However, improvements to the current study can be made by analyzing components at the micro level, such as rubrics, experiential learning effects, and supplemental tools over time. It is important to mention that, while the WNMU-SB PLOs were designed using a well-documented systematic process, degrees of subjectivity are present. The Likert scale used to evaluate activities/outcomes will always carry reliability and validity concerns with respect to biased ratings. Despite the fact that the scales and outcomes are determined by the consensus of the WNMU-SB faculty, the development of the assessment measures is a concern that could be addressed in future papers. As for the micro-level components, measurements of fidelity can be implemented and analyzed over time. Future research may also include microanalysis of the longitudinal student performance and internal comparison of data between programs (Accounting, Marketing, and General Business).

Appendixes

Appendix 1

Every Management major is required to take the supporting coursework, core business courses, and management concentration courses. Full-time students (like those in our sample) take these courses depending on their standing in the program. All business majors take the supporting coursework and core courses, which include the capstone (BSAD 497), but not every student takes the Management concentration courses.

Table 1. Courses assessing management program learning outcomes

Supporting business coursework | Course number | Credits | Student standing
Macroeconomics Principles | ECON 2110 | 3 | Freshman
Microeconomics Principles | ECON 2120 | 3 | Freshman
Total | | 6 |

Business Core Courses | Course number | Credits | Student standing
Principles of Accounting I | ACCT 2110 | 3 | Sophomore
Principles of Managerial Accounting | ACCT 2120 | 3 | Sophomore
Principles of Finance | BFIN 2120 | 3 | Sophomore
Business Law I | BLAW 2110 | 3 | Sophomore
Principles of Management | MGMT 2110 | 3 | Sophomore
Principles of Marketing | MKTG 2110 | 3 | Sophomore
Human Resources Management | MGMT 332 | 3 | Sophomore
Business Policies and Management | BSAD 497 | 3 | Senior
Total | | 24 |

Management Concentration | Course number | Credits | Student standing
Communications in Business and Industry | BSAD 355 | 3 | Junior/Senior
Business Research | BSAD 441 | 3 | Junior/Senior
International Business | BSAD 486 | 3 | Junior/Senior
Labor Economics or Applied Business Economics | ECON 350/370 | 3 | Junior/Senior
Intermediate Finance for Managers | FINC 471 | 3 | Junior/Senior
Organizational Behavior | MGMT 452 | 3 | Junior/Senior
Decision Making in Environmental Management | MGMT 454 | 3 | Junior/Senior
Operations Management | MGMT 461 | 3 | Junior/Senior
Total | | 24 |

Appendix 2

Curriculum mapping is an exercise for full-time faculty in which we identify and incorporate the assessments related to the Program Learning Outcomes. By discussing stakeholders’ input, performance analyses, information sharing, and interdepartmental collaborations as a team, we develop the best assessment(s) possible for each program. Every program (Accounting, Management, Marketing, and General Business) has its own curriculum map. All maps are created using the same process. Some courses share interdepartmental PLOs; for example, finance courses overlap with accounting concepts, and these concepts are measured by including accounting PLOs in the finance homework rubrics.

Table 2. Curriculum mapping management program

Courses | PLO 1 | PLO 2 | PLO 3 | PLO 4 | PLO 5

Core Requirements
ACCT 2110 | Assignment
ACCT 2120 | Assignment
BLAW 2120 |
MKTG 2110 | Assignment | Assignment | Assignment | Assignment | Discussion
MGMT 2110 | Essay | Essay
BFIN 2110 | Budgeting | Budgeting | Essay
BSAD 497 | Case analysis | Case analysis | Business plan | Business simulation | Business plan
MGMT 332 | Essay | Essay | Budgeting | Essay | Discussion

Supporting
ECON 2110 | Discussion | Discussion | Discussion | Discussion
ECON 2120 | Budgeting | Discussion | Discussion | Discussion

Specialization
MGMT 355 | Discussion
BSAD 441 |
MGMT 454 | Case analysis | Case analysis | Case analysis
MGMT 452 | Discussion
MGMT 461 | Discussion | Discussion | Discussion | Practice exercises | Practice exercises
FINC 471 | Budgeting | Budgeting | Reporting
BSAD 486 | Discussion | Case analysis | Case analysis | Case analysis | Essay
ECON 350 | Case analysis | Budgeting | Budgeting | Case analysis | Essay
ECON 370 | Case analysis | Budgeting | Budgeting | Case analysis | Essay

Teaching level key: Introductory / Basic / Advanced


References

  • Calma, A. (2013). Fixing holes where the rain gets in: Problem areas in the development of generic skills in business. Journal of International Education in Business, 35-50.
  • Calma, A. (2017). The long and winding road: Problems in developing capabilities in an undergraduate commerce degree. International Journal of Educational Management, 418-429.
  • Choudhury, D. (2012). Rubrics as an analytical tool for Indian business schools with a conceptual model using SEM. International Journal of Innovative Research & Development, 11-24.
  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Routledge.
  • Cotton, J. (2001). An investigation into the relationship between selected economic variables and diploma and degree uptake. Education + Training, 325-332.
  • D’Angelo, B. (2001). Integrating and assessing information competences in a gateway course. Reference Services Review, 282-293.
  • Ding, R., & Ma, F. (2013). Assessment of university student web searching competency by a task-based online test. The Electronic Library, 359-375.
  • Ferreras-Garcia, R., Sales-Zaguirre, J., & Enric, S.-L. (2020). Competences in higher education tourism internships. Education + Training, 64-80.
  • Fink, L. (2013). Creating significant learning experiences: An integrated approach to designing college courses. San Francisco: Jossey-Bass.
  • Geithner, C. A., & Pollastro, A. N. (2015). Constructing engaged learning in scientific writing: Implementation and assessment of a blended pedagogical approach. Journal of Applied Research in Higher Education, 292-307.
  • Gilbert, D., & Oedekoven, O. O. (2015). Architecting assessment: Meeting and exceeding ACBSP standards with a design-based approach. ResearchGate.
  • Hallinger, P., & Lu, J. (2013). Learner-centered higher education in East Asia: Assessing the effects on student engagement. International Journal of Educational Management, 594-612.
  • Jarchow, M. E., Formisano, P., Shane, N., & Sayre, M. (2017). Measuring longitudinal student performance on student learning outcomes in sustainability education. International Journal of Sustainability in Higher Education, 547-565.
  • Lamon, M., Reeve, R., & Scardamalia, M. (2001). Mapping learning and the growth of knowledge in a knowledge building community. Retrieved from https://ikit.org/posters/2001mapping.pdf
  • Oedekoven, O. O. (2019). Bridging the gap between perceived and actual quality through voluntary accreditation. 12th International Accreditation Conference (pp. 27-30). New Delhi: Peregrine Global Services.
  • Perruso Brown, C., & Kingsley-Wilson, B. (2010). Assessing organically: Turning an assignment into an assessment. Reference Services Review, 536-556.
  • Reich, A. Z., Collins, G. R., DeFranco, A. L., & Pieper, S. L. (2019). A recommended closed-loop assessment of learning outcomes process for hospitality programs. International Hospitality Review, 53-66.
  • Terry Orr, M., Hollingworth, L., & Beaudin, B. (2020). Performance assessment for school leaders: Comparing field trial and implementation results. Journal of Educational Administration, 38-59.