CHANGE PROJECT
AI-Enabled Quality Assurance Dashboard
for Danang University of Medical Technology and Pharmacy
1. Project Title
“Leveraging Artificial Intelligence to Develop a Quality Assurance Monitoring Dashboard at Danang University of Medical Technology and Pharmacy.”
2. Background and Rationale
In recent years, Danang University of Medical Technology and Pharmacy has strengthened its Quality Assurance (QA) system to comply with national and regional accreditation standards and to advance continuous improvement. Despite these efforts, several systemic challenges persist:
(1) Fragmented and manual data management: Survey data, student learning outcomes, and accreditation evidence are stored across multiple platforms. Data compilation, performed mainly in Excel and Word, is labour-intensive and susceptible to human error.
(2) Delayed analysis and decision-making: Periodic reporting limits access to real-time data, reducing the institution’s ability to identify emerging issues and implement timely interventions.
(3) Lack of comprehensive data visualization and early-warning mechanisms: The absence of an institutional KPI Dashboard restricts leadership’s ability to gain a holistic view of academic quality during governance meetings.
(4) Limited transparency and information accessibility: Faculty members and students have insufficient access to QA data, limiting engagement in quality enhancement processes.
Opportunity for innovation: The rapid advancement of Artificial Intelligence (AI) offers considerable potential for modernizing QA practices. AI can automate survey analytics, extract insights from qualitative feedback, visualize performance indicators, and issue early-warning alerts. These functions enhance transparency and strengthen evidence-based decision-making across academic units.
Consequently, the project “Leveraging AI to Develop a QA Monitoring Dashboard” aims to address current limitations directly while supporting the institution’s transition toward a data-driven quality culture.
3. Project Objectives
3.1. General Objective
To design and implement an AI-enhanced Quality Assurance Dashboard that provides real-time data, supports informed decision-making, and promotes transparency across the university.
3.2. Specific Objectives
(1) Integrate AI tools to analyse student survey data and learning outcomes, reducing reporting time by ≥50%; complete the pilot analytical module by February 2026.
(2) Develop a QA Dashboard featuring at least 10 automatically updated KPIs; conduct internal testing by March 2026.
(3) Promote data-informed decision-making, with ≥80% of QA staff and ≥50% of faculty administrators using the Dashboard by April 2026.
(4) Enhance transparency and stakeholder engagement: at least 70% of faculty members and 40% of students gain access to QA data; satisfaction levels increase by ≥20%.
4. Scope of Implementation
- Scale: A pilot phase will be implemented in 1–2 academic faculties, followed by institution-wide deployment.
- Stakeholder Groups:
+ University Leadership: Gains access to integrated visual analytics to support strategic governance.
+ Quality Assurance Office/Center: Reduces workload in data consolidation and reporting; enhances analytical capacity; strengthens quality enhancement processes.
+ Academic Departments and Faculty Members: Access course-level student feedback and learning outcome data to support timely pedagogical improvement.
+ Students: Benefit from increased access to synthesized, transparent QA information, fostering trust and engagement.
5. Key Activities (PDCA Framework)
| Phase | Timeline | Core Activities |
|-------|----------|-----------------|
| PLAN | Nov–Dec 2025 | Conduct QA data mapping; assess stakeholder needs; define institutional KPIs; review IT infrastructure readiness. |
| DO | Jan–Feb 2026 | Design AI models (ML, NLP); standardize and integrate datasets (see the sketch after this table); develop Dashboard architecture. |
| CHECK | Mar 2026 | Build prototype Dashboard; integrate data from pilot faculties; conduct pilot testing and collect evaluative feedback. |
| ACT | Apr 2026 | Optimize based on feedback; finalize implementation report; scale to institution-wide deployment; propose mechanisms for sustainability. |
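
To make the DO-phase activity of standardizing and integrating datasets more concrete, the following minimal Python sketch shows one way a survey export could be mapped to a common schema and rolled up into a KPI. The column names, file name, and 1–5 Likert scale are illustrative assumptions for this proposal, not decisions of the project; the real mapping would come from the data dictionary defined in the PLAN phase.

    import pandas as pd

    # Hypothetical export column names; the actual mapping would be taken from
    # the project's shared data dictionary.
    COLUMN_MAP = {
        "MaHocPhan": "course_id",       # course code
        "MaKhoa": "faculty_id",         # faculty code
        "DiemHaiLong": "satisfaction",  # satisfaction score on a 1-5 Likert scale
    }

    def standardize(df: pd.DataFrame) -> pd.DataFrame:
        """Rename columns to a common schema and drop unusable rows."""
        df = df.rename(columns=COLUMN_MAP)
        df["satisfaction"] = pd.to_numeric(df["satisfaction"], errors="coerce")
        df = df.dropna(subset=["course_id", "faculty_id", "satisfaction"])
        return df[df["satisfaction"].between(1, 5)]

    def mean_satisfaction_kpi(df: pd.DataFrame) -> pd.DataFrame:
        """One example KPI: mean satisfaction and response count per course."""
        return (df.groupby(["faculty_id", "course_id"])["satisfaction"]
                  .agg(["mean", "count"])
                  .reset_index())

    if __name__ == "__main__":
        raw = pd.read_csv("survey_export.csv")  # hypothetical pilot-faculty file
        print(mean_satisfaction_kpi(standardize(raw)))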
6. Expected Results
6.1. Outputs
(1) An AI-enhanced QA Dashboard with functionalities including:
- Monitoring student satisfaction using survey analytics and NLP-based text analysis (see the first sketch following this list).
- Tracking student learning outcomes (pass rates, on-time graduation rates), disaggregated by program, faculty, and course.
- Real-time monitoring of program accreditation and self-assessment progress.
- Visualization of compliance with Course Learning Outcomes (CLOs) and Program Learning Outcomes (PLOs), supporting curriculum review and improvement.
(2) Automated analytical reports, including:
- Reports synthesized from student surveys, faculty evaluations, and academic performance data.
- Trend analysis and predictive risk alerts (e.g., sudden increases in course failure rates).
- Early-warning notifications sent to leadership and QA units when indicators exceed threshold values (see the second sketch following this list).
(3) Comprehensive user documentation detailing data entry protocols, report interpretation, user authorization levels, and system maintenance procedures.
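
As an illustration of the NLP-based text analysis mentioned above, the first sketch below labels free-text survey comments with a deliberately simple, lexicon-based approach. The keyword lists are toy assumptions; a production system would instead use a trained Vietnamese-capable sentiment model (for example, a classifier fine-tuned on PhoBERT), selected with the program's technical mentors.

    from collections import Counter

    # Toy bilingual keyword lists, for illustration only.
    POSITIVE = {"good", "clear", "helpful", "tốt", "rõ ràng", "hữu ích"}
    NEGATIVE = {"confusing", "boring", "unfair", "khó hiểu", "nhàm chán"}

    def label_comment(text: str) -> str:
        """Assign a crude polarity label from keyword hits in a comment."""
        t = text.lower()
        pos = sum(term in t for term in POSITIVE)
        neg = sum(term in t for term in NEGATIVE)
        if pos > neg:
            return "positive"
        if neg > pos:
            return "negative"
        return "neutral"

    comments = [
        "Giảng viên giải thích rất rõ ràng",      # "The lecturer explains very clearly"
        "The grading felt unfair and confusing",
    ]
    print(Counter(label_comment(c) for c in comments))
    # Counter({'positive': 1, 'negative': 1})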
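
For the early-warning notifications, the second sketch shows one possible threshold check. The KPI names, limit values, and message format are assumptions made here for illustration; actual thresholds would be set by the QA Office, and delivery (email, dashboard banner) would depend on the platform chosen during implementation.

    from dataclasses import dataclass

    @dataclass
    class Threshold:
        kpi: str
        limit: float
        direction: str  # "above" or "below"

    # Hypothetical thresholds; real values would be set by the QA Office.
    THRESHOLDS = [
        Threshold("course_failure_rate", 0.25, "above"),
        Threshold("survey_response_rate", 0.40, "below"),
    ]

    def check_alerts(kpis: dict) -> list:
        """Return a warning message for every KPI that breaches its threshold."""
        alerts = []
        for t in THRESHOLDS:
            value = kpis.get(t.kpi)
            if value is None:
                continue
            breached = value > t.limit if t.direction == "above" else value < t.limit
            if breached:
                alerts.append(f"EARLY WARNING: {t.kpi} = {value:.2f} "
                              f"({t.direction} limit {t.limit:.2f})")
        return alerts

    print(check_alerts({"course_failure_rate": 0.31, "survey_response_rate": 0.55}))
    # ['EARLY WARNING: course_failure_rate = 0.31 (above limit 0.25)']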
6.2. Outcomes
- Enhanced institutional transparency: Leadership, faculty, and students gain access to objective and timely QA data, fostering shared responsibility for quality enhancement.
- Improved decision-making capacity: Real-time data visualization enables timely adjustments to curriculum, teaching practices, and strategic planning.
- Strengthened data-driven QA culture: The institution transitions from manual reporting to AI-supported smart governance.
- Scalability and transferability: The Dashboard model can be expanded to additional units and adapted by other higher education institutions seeking similar innovations.
7. Potential Risks and Mitigation Strategies
| Risk/Barrier | Impact | Likelihood | Mitigation Strategy |
|--------------|--------|------------|---------------------|
| Limited AI capacity | Delays and suboptimal analytical performance | Medium | Engage external mentors; collaborate with IT units; provide short-term internal capacity-building. |
| Fragmented or inconsistent data | Integration challenges and inaccurate analysis | High | Standardize data structures; conduct pilot implementation; establish institutional data governance protocols. |
| Resistance to change | Low adoption rate | Medium | Implement communication strategies; provide training workshops; adopt phased implementation. |
| Sustainability concerns | System underutilization | Medium | Establish a QA–AI taskforce; secure annual funding; collaborate with the InnoAIQA-VN network. |
8. Support Required from the InnoAIQA-VN Program
8.1. Technical Support
- Guidance on AI tools for analysing student surveys, learning outcomes, and QA evidence.
- Consultation on selecting and integrating appropriate data visualization platforms aligned with institutional IT capacity.
8.2. Professional Support
- Continuous mentorship throughout the design, implementation, piloting, and scaling phases.
- Advice on ethical standards related to AI in higher education, particularly concerning data privacy, confidentiality, and governance.
8.3. Networking and Knowledge Sharing
- Connections with national and international experts in AI and QA to exchange best practices.
- Participation in the program’s professional learning network to share progress, obtain feedback, and access emerging practices in AI-enabled QA.
- Support for disseminating project outcomes to other higher education institutions, especially those with limited technological resources.