Define clear objectives and goals, based on user needs
Establish a performance monitoring framework: Use a performance monitoring framework to understand the digital platform’s real-world impact and how users interact with digital services. The framework should be established from an end-user perspective, not from the perspective of an agency’s infrastructure. Use clear objectives and goals framed in the context of what users need and expect from the digital service.
Choose relevant metrics that align with organisational goals, meet Digital Performance Standard criteria and capture the user experience
Key performance indicators: Apply measures to achieve the outcome as set out in the Digital Performance Standard and to support your organisational goals. They should be specific and measurable and further your agency’s understanding of how users interact with your agency on digital platforms. Metrics need to be meaningful to help you understand and improve the user experience, and meaningful metrics are crucial to the overall success of the framework.
Apply a best-practice approach: Implement a performance monitoring approach that is comprehensive and focuses on the end-user experience. Where best practice cannot be achieved or does not line up with your agency’s other metrics, strive to introduce best practice concepts over time.
Articulate how you will implement the monitoring framework
Leverage analytical tools: Reliable digital analytics tools may need to be implemented to collect and analyse performance data. When designing a framework, consider what data sources you require for successful implementation and what can be readily deployed within your ICT environment.
Develop processes for continuous improvement based on insights
Continuous improvement of the user experience: Integrate processes for continuous improvement with a focus on user-centric benefits. Data and feedback should be regularly analysed to find improvement opportunities to enhance overall user experience.
Use a baseline to measure performance: Establish a baseline for your digital service performance with data gathered from your digital service. A baseline can identify areas to improve a digital service in line with user expectations.
Share insights and learnings: Share your insights and learnings with the DTA and other agencies. A collaborative approach to digital experience will support whole-of-government standardisation of digital services, build digital and ICT capabilities and deliver a consistent customer experience. The DTA will support agencies by incorporating insights and best practices in its guidance documents and toolkit.
Guidance to implement a monitoring framework
Define clear objectives
To develop your monitoring framework, start by setting clear objectives and goals for your digital service based on user needs:
- Conduct user research to identify the needs, expectations and goals of the users of the digital service.
- Use service design methods to further interpret user research and document the user perspective.
- Define the desired outcomes and impacts of your service for your users based on the user perspective.
Choose relevant metrics
Choose metrics that align with organisational goals and meet Performance Standard criteria:
- Consider the user experience, service benefits and individual agency objectives.
- Create key performance indicators (KPIs) that are meaningful for your service and your agency.
- Make sure the KPIs can be monitored over time to enable continuous improvements.
- Check your KPIs align with criteria 2, 3 and 4 of the Performance Standard.
- Choose metrics to measure against your KPIs, for example user satisfaction, completion rates and service availability (a sketch recording such KPIs follows this list).
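One way to make KPIs concrete is to record each one in a small, structured definition that your monitoring and reporting tooling can read. The sketch below is illustrative only: the KPI names, metrics, targets and criterion mappings are assumptions for the example, not prescribed values.

```python
from dataclasses import dataclass

@dataclass
class KPI:
    """A single key performance indicator for a digital service."""
    name: str                # what is being measured
    metric: str              # how it is measured
    target: float            # the level the service aims to meet
    unit: str                # unit of the target (e.g. %, seconds)
    standard_criterion: int  # Performance Standard criterion it supports (2, 3 or 4)

# Illustrative KPIs only -- targets are placeholders, not prescribed values.
kpis = [
    KPI("User satisfaction", "average post-transaction survey score", 85.0, "%", 2),
    KPI("Task completion rate", "completed transactions / started transactions", 90.0, "%", 3),
    KPI("Service availability", "successful availability checks / total checks", 99.5, "%", 4),
]

for kpi in kpis:
    print(f"{kpi.name}: target {kpi.target}{kpi.unit} (criterion {kpi.standard_criterion})")
```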
Choose monitoring tools and methods
Choose appropriate monitoring tools and methods to collect, store, and analyse performance data:
- Research different tools and methods that can capture, store, and analyse data related to your KPIs, referring to guidance for criteria 2, 3 and 4.
- See what tools and methods other agencies use to understand best practices.
- Collate your research and create a shortlist of tools and methods.
- Make sure tools and methods meet the legal and ethical requirements for data collection and analysis.
- Make an informed decision, for example by conducting a cost-benefit analysis, and document your choices and rationale (a simple weighted-scoring sketch follows this list). Consider factors such as:
- alignment with your chosen performance indicators
- ease of use
- adaptability, compatibility and ease of integration
- ability to produce comprehensive reports
- scalability
- cost
- security and reliability.
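A cost-benefit comparison of shortlisted tools can be as simple as a weighted score across the factors listed above. The following sketch shows the shape of that calculation; the tool names, weights and scores are placeholders, not recommendations.

```python
# Weighted scoring of shortlisted monitoring tools.
# Factors and weights mirror the list above; tool names and scores are placeholders.
weights = {
    "kpi_alignment": 0.25,
    "ease_of_use": 0.15,
    "integration": 0.15,
    "reporting": 0.15,
    "scalability": 0.10,
    "cost": 0.10,       # higher score = better value for money
    "security": 0.10,
}

candidates = {
    "Tool A": {"kpi_alignment": 4, "ease_of_use": 5, "integration": 3, "reporting": 4,
               "scalability": 3, "cost": 2, "security": 4},
    "Tool B": {"kpi_alignment": 5, "ease_of_use": 3, "integration": 4, "reporting": 5,
               "scalability": 4, "cost": 3, "security": 5},
}

for tool, scores in candidates.items():
    total = sum(weights[factor] * scores[factor] for factor in weights)
    print(f"{tool}: weighted score {total:.2f} out of 5")
```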
Implementing the monitoring framework
Articulate the monitoring framework in a document that covers data collection and reporting:
- Include information such as data sources, methods, frequency, formats, and the roles and responsibilities of staff and teams.
- Establish clear rules and maintain thorough documentation of your quality assurance mechanisms to provide clarity for staff involved in data processing.
- Include data validation (checking data for errors and anomalies), data verification (confirming data is accurate and consistent against source or reference points) and data cleansing (identifying and correcting inaccurate data) procedures to ensure the collection and use of high-quality data; a sketch of such checks follows this list.
- Plan how data and insights will be presented and communicated, for example a dashboard, spreadsheet or a report.
- Explain the process for reviewing and acting on the performance data, including the key decision makers, responsible persons and key stakeholders.
- Review and update the document regularly based on the effectiveness of the monitoring framework, new information and changing needs or priorities.
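Quality assurance rules are easier to apply consistently when they are written as small, repeatable checks. The sketch below assumes performance records arrive as dictionaries with hypothetical fields (date, completion_rate, load_time_seconds); adapt the field names and rules to your own data sources.

```python
def validate(record: dict) -> list[str]:
    """Data validation: flag missing fields, errors and anomalies in one record."""
    errors = []
    for field in ("date", "completion_rate", "load_time_seconds"):
        if field not in record:
            errors.append(f"missing field: {field}")
    rate = record.get("completion_rate")
    if rate is not None and not 0 <= rate <= 100:
        errors.append("completion_rate outside 0-100")
    return errors

def cleanse(records: list[dict]) -> list[dict]:
    """Data cleansing: drop records that fail validation and remove duplicates."""
    seen, clean = set(), []
    for record in records:
        key = (record.get("date"), record.get("completion_rate"),
               record.get("load_time_seconds"))
        if not validate(record) and key not in seen:
            seen.add(key)
            clean.append(record)
    return clean

# Data verification would compare the cleansed totals against the source system,
# for example checking daily transaction counts against the transactional database.
```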
Set achievable targets or benchmarks
Benchmarks are reference points to compare your performance with other services, agencies and sectors. They help you set realistic and achievable targets for each KPI based on user expectations and best practices.
To set benchmarks and targets:
- Review the benchmarks and targets in similar services within your agency, government, or the private sector.
- Use the benchmarks to inform the targets for each KPI. For example, you could examine the average task time or drop-out rate for completing a similar task in different services, as sketched after this list. Targets should reflect user expectations, best practices and the strategic goals of your agency.
- Check your targets and benchmarks are specific, measurable, attainable, relevant and time bound (SMART). Targets and benchmarks should reflect the available resources of your service.
- Document the choices and rationale for setting these targets or benchmarks, and communicate them to relevant stakeholders, senior management or other agencies.
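To show how a benchmark can inform a target, the sketch below averages the drop-out rates of comparable services and proposes a target from that figure. The service names and rates are invented for illustration, not real benchmarks.

```python
# Drop-out rates observed for a similar task in comparable services (placeholder figures).
comparable_services = {
    "Service X": 0.18,   # 18% of users abandon the task
    "Service Y": 0.12,
    "Service Z": 0.15,
}

benchmark = sum(comparable_services.values()) / len(comparable_services)
# An example target: perform at least as well as the benchmark, reviewed within 12 months.
target = round(benchmark, 2)

print(f"Benchmark drop-out rate: {benchmark:.0%}")
print(f"Proposed target: no more than {target:.0%} drop-out, reviewed after 12 months")
```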
Develop processes for continuous improvement
Develop processes for sharing insights and continuously improving the digital service:
- Make sure the people responsible for performance monitoring work closely with those delivering the digital service.
- Frequently assess your service’s performance data, metrics and insights against your benchmarks or targets, pinpointing areas where you excel and where improvements are needed.
- Use data-driven insights to inform decisions and actions for enhancing the quality, efficiency and effectiveness of your service.
- Implement agile and iterative methods for testing, experimenting and deploying changes to your service.
- Use prototypes, minimum viable products, or beta versions to validate assumptions and hypotheses and measure the changes in your performance data.
- Document findings, learnings and best practices from your service improvement processes. Share these with your team, senior management, other agencies or the public.
- Foster a culture of learning and innovation in your organisation and encourage collaboration and knowledge sharing across teams and agencies.
- Seek feedback and input from your peers, mentors, experts, or external partners.
- Participate in communities of practice, forums, or events related to your service domain.
Combining user research efforts across the DX Policy and its standards can help to reduce duplication and the cost of research.
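When a change is trialled through a prototype or beta, its effect on a KPI can be checked by comparing performance data gathered before and after the change. The figures below are placeholders that illustrate the comparison, not real results.

```python
# Completion rates (%) sampled before and after an illustrative change to a form.
before = [78, 81, 79, 80, 77]
after = [85, 83, 86, 84, 85]

baseline = sum(before) / len(before)
updated = sum(after) / len(after)
change = updated - baseline

print(f"Completion rate moved from {baseline:.1f}% to {updated:.1f}% ({change:+.1f} points)")
# In practice, also check the sample size and variability before acting on the difference.
```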
Identify the most appropriate measure to monitor availability
Fit for purpose: Assess whether your existing monitoring methods for digital service availability (if any exist) are fit for purpose before considering new tools.
Prioritise user-centric metrics: Align metrics with user expectations and preferences to create seamless digital experiences. Reflect on diverse user journeys and consider different entry points, navigation paths and transaction types.
Monitor the availability of the digital service based on the expected user outcomes
Measure from the end-user’s perspective: Make sure digital services are available by monitoring them from both an internal and an external perspective. Implement tools that monitor uptime to make sure the system remains online. To catch any issues that internal checks might miss, consider other tools that simulate real-world experiences from a user perspective, as sketched below. Comprehensive monitoring will allow agencies to understand and improve the experience of the end-user.
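A minimal synthetic check, simulating a user’s request from outside the agency’s infrastructure, might look like the sketch below. The URL, timeout and number of checks are assumptions; production monitoring would normally use a dedicated tool run continuously from several locations.

```python
import time
import urllib.request

SERVICE_URL = "https://www.example.gov.au/service"  # placeholder URL, not a real endpoint
TIMEOUT_SECONDS = 10

def check_once() -> bool:
    """Simulate a user's request and report whether the service responded successfully."""
    try:
        with urllib.request.urlopen(SERVICE_URL, timeout=TIMEOUT_SECONDS) as response:
            return 200 <= response.status < 300
    except OSError:  # covers connection errors and timeouts
        return False

if __name__ == "__main__":
    # A real monitor would run continuously from several external locations and
    # alert on failures; here we simply run a few checks and log the results.
    for _ in range(3):
        print(f"{time.strftime('%Y-%m-%dT%H:%M:%S')} available={check_once()}")
        time.sleep(60)
```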
Act to improve user outcomes
Maintain a reliable service: Make sure your digital service is available, stable and consistent for users no matter their location. Schedule downtime and maintenance when it will cause the least disruption for users and notify users well ahead of time that digital services will be impacted or unavailable.
Create response plans: Make sure clear communication channels are included in response plans. This will allow your agency to proactively address issues and act quickly to maintain availability of the service.
Guidance to measure the availability of your digital service
Identify appropriate measures to monitor availability
Service availability is how often your service is available and accessible. To monitor availability:
- Define what availability means for your service based on user expectations. For instance, this could mean a digital service that's always accessible, has functional links and works on mobile devices in areas with unreliable internet speeds. See also: Define clear objectives.
- Identify the key performance indicators (KPIs) that reflect the availability of your service. At a minimum, monitor uptime; other KPIs might include error rate, load time, fully loaded time or percentage of valid links (a sketch computing these from collected checks follows this list). See also: Choose relevant metrics.
- Choose tools and methods that enable you to collect, store and analyse the data related to your KPIs. These tools should be capable of providing real-time insights and generating comprehensive reports. See also: Choose monitoring tools and methods.
- Make sure the tools integrate seamlessly with your existing systems to facilitate smooth data flow and accessibility for your team.
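Once checks or request logs are being collected, availability KPIs such as uptime and error rate reduce to simple ratios. The sketch below assumes a list of check results exported from your monitoring tool; the data shown is illustrative only.

```python
# Each entry records one availability check: (success, response_time_seconds).
# Placeholder data -- in practice this comes from your monitoring tool's export or API.
checks = [
    (True, 0.42), (True, 0.38), (False, None), (True, 0.51), (True, 0.47),
]

total = len(checks)
successful = sum(1 for ok, _ in checks if ok)

uptime = successful / total * 100
error_rate = (total - successful) / total * 100
load_times = [t for ok, t in checks if ok and t is not None]
average_load_time = sum(load_times) / len(load_times)

print(f"Uptime: {uptime:.1f}%")
print(f"Error rate: {error_rate:.1f}%")
print(f"Average load time: {average_load_time:.2f}s")
```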
Set availability benchmarks or targets
Set benchmarks or targets to evaluate service availability performance:
- You might specify the minimum acceptable level of availability, the maximum tolerable level of planned and unplanned downtime, or the optimal range of load time for your service (a sketch converting an availability target into a downtime budget follows this list). See also: Define clear objectives and Choose monitoring tools and methods.
- Establish a consistent schedule and procedure for tracking availability data and presenting the findings.
- Examine the findings to pinpoint gaps, trends, patterns, or anomalies that highlight the service's performance and its impact on user outcomes.
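An availability target translates directly into a downtime budget, which is often easier to communicate to stakeholders. The 99.5% figure below is an example used to show the arithmetic, not a recommended target.

```python
# Convert an example availability target into a monthly downtime budget.
availability_target = 0.995          # 99.5% -- placeholder target
minutes_per_month = 30 * 24 * 60     # approx. 43,200 minutes in a 30-day month

allowed_downtime = (1 - availability_target) * minutes_per_month
print(f"A {availability_target:.1%} target allows about {allowed_downtime:.0f} minutes "
      f"of downtime per month ({allowed_downtime / 60:.1f} hours)")
```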
Act to improve user outcomes
Act on the findings and implement improvements or changes to the service design, delivery, or maintenance as needed.
Actions may include:
- fixing bugs
- upgrading servers
- enhancing features
- communicating with users.