Future Service Standard

The Future Service Standard sets out 10 points to help us build, deliver and run excellent services and experiences for all who interact with us.

We believe that the best services are convenient, intuitive and delightful to use. The purpose of the Future Service Standard is to help us, as a team, deliver consistently good services online and, when required, offline.

In short it will help us:

  • define what good Future services look like

  • identify the steps we need to take to get there

Any service we want to deliver should meet the standard below before it is pushed live.

The 10 Future Points

  1. Understand who our users are and their needs

  2. Design the whole experience, end to end, front to back

  3. Make it simple and intuitive

  4. Deliver a consistent user experience

  5. Actively use agile and iterative practices

  6. Give ownership and accountability

  7. Embed privacy and security by design

  8. Use data to drive decisions

  9. Support those who need it

  10. Test from end to end


1. Understand who our users are and their needs

Continually research to develop a deep knowledge of our users, their needs and their context for using our service.

 

Why it matters:

We must begin projects by exploring and pinpointing the needs of the people who will use the service and the ways the service will fit into their lives. We must include real people in the design process from the beginning. The needs of these people should inform technical and design decisions. We need to continually test the products we build with real people to keep us honest about what is important.

At its core, understanding our users and their needs means:

  • Our products and services will be built on our users’ real needs, not our own assumptions

  • We will deliver better solutions and service experiences

  • We should discover additional opportunities and insights

  • We will prevent wasted effort implementing the right idea in the wrong way

Checklist

  • Use a range of qualitative and quantitative research methods to determine user goals, needs, and behaviors

  • Document the findings about user goals, needs, behaviors, and preferences

  • Create a list of user stories, personas and profiles for the service

  • Share findings with the team and leadership

  • As the service is being built, regularly test it with potential users to ensure it meets their needs

Key Questions

  • Who are your primary users?

  • What user needs will this part of the service address?

  • Why does the user want or need this service?

  • Which people will have the most difficulty with the service?

  • Which research methods were used?

  • What were the key findings?

  • How were the findings documented? Where can future team members access the documentation?

  • How often are we testing with real people?


 

2. Design the whole experience, end to end, front to back

We need to understand the different ways people will interact with our services, including the actions they take online, on a phone, or in person. Every encounter — whether it’s online or offline — should move the user closer towards their goal.

 

Why it matters:

It's important to understand what users' motivations are when they access a service, and how that service fits within the broader context of their lives.

The service experience is much more than a product people interact with on screen. It begins when they first hear about the service and doesn't end until they've reached the end of their Future journey. It also encompasses all of the internal processes and tools used to support that journey, making the user's experience as seamless and delightful as possible.

Checklist

  • Understand the different points at which users will interact with the service – both online and in person

  • Continually identify any pain points in the way users interact with the service, and prioritise these according to user needs

  • Design the digital parts of the service so that they are integrated with the offline touchpoints people use to interact with the service

  • Develop metrics that will measure how well the service is meeting user needs at each step of the service

Key Questions

  • What are the different ways (both online and offline) that people currently accomplish the task the digital service is designed to help with?

  • Where are user pain points in the current way people accomplish the task?

  • Where does this specific project fit into the larger way people currently obtain the service being offered?

  • What metrics will best indicate how well the service is working for its users?

  • What is the best way to respond to this user need? Be driven by need not by tech.



 

3. Make it simple and intuitive

Using our service shouldn’t be stressful, confusing, or daunting. It’s our job to build services that are simple and intuitive enough that users have every opportunity to succeed first time.

 

Why it matters:

It's important to make sure our services are as simple and straightforward as possible. All users, even those who have accessibility needs or lack digital experience, should be able to complete a task/interaction easily.

If a service is complex or unclear, users will either be forced to contact us for help to complete their task, or they may avoid using it altogether. Not only does this lead to higher operational costs, it can also lead to user frustration and a loss of confidence in our brand.

Checklist

  • Use a simple and flexible design style guide for the service.

  • Give users clear information about where they are in each step of the process

  • Follow accessibility best practices to ensure all people can use the service

  • Provide users with a way to exit and return later to complete the process

  • Use language that is familiar to the user and easy to understand - Verbs not Nouns

  • Use language and design consistently throughout the service, including online and offline touch points

  • Use analytics and user research to reduce the number of people who didn’t complete the task they set out to do (e.g. Complete 1:1)

Key Questions

  • What primary tasks are the user trying to accomplish?

  • Is the language as plain and universal as possible?

  • If a user needs help while using the service, how do they go about getting it?

  • How does the service’s design visually relate to our other services?

  • How will we use data and analytics to track the success or failure of the task?

 

4. Deliver a consistent user experience

When a user interacts with Future, their experience should feel cohesive, positive, consistent and on brand.

 

Why it matters:

Users should know when they are part of a Future experience, regardless of platform or channel.

Services delivered online, over the phone or in person should provide a consistent experience for our users. From branding to tone of voice and complaints, users should always feel confident in their ability to complete the service properly and our ability to guide them to the completion of their task.

Delivering a consistent experience across all Future touch points means:

  • Users trust Future’s services because they recognise the style

  • You don’t have to build something entirely new, so you save time and can focus on the unique parts of your service

  • You're using patterns and style which are based on data and user research

Checklist

  • Demonstrate that the service is responsive, with the same content and functionality on all devices, including mobile

  • Demonstrate how the service has used Future’s design standards and adheres to brand guidelines

  • Apply Future’s content guide to maintain the tone and voice of our content

Key Questions

  • Is the service designed using the Future style guide?

  • Is the language consistent with Future’s tone of voice guide and appropriate for the target user?

  • Is this delivering a Future experience we can be proud of?

 

5. Actively use agile and iterative practices

Build a service that can be iterated and improved frequently, using agile practices that allow us to be more proactive and respond more quickly to change, both in technology and business strategy.

 

Why it matters:

Agile is an approach to building services that breaks the work into smaller chunks known as iterations: we build one feature of the service at a time until the entire service is complete.

It is a much lower-risk approach than the traditional build-it-all-at-once approach (known as waterfall), because frequent iterations expose any flaws in the original plan (e.g. missing approvals, insufficient resources) and let us iterate and improve much faster.

Agile methods build services that:

  • Can be prototyped quickly (and shown to users for regular feedback)

  • Meet the needs of users

  • Can change easily

  • Can keep improving based on user feedback

  • Can be built quickly with a minimum set of features, and enhanced with more features after the service goes live

Checklist

  • Ship a functioning “minimum viable product” (MVP) that solves a core user need as soon as possible

  • Run usability tests frequently to see how well the service works and identify improvements that should be made

  • Ensure the individuals building the service communicate closely using techniques and tools to optimise the process such as launch meetings, daily standups, retrospectives and team chat tools

  • Keep delivery teams small and focused; limit organisational layers that separate these teams from the business owners

  • Create and maintain a prioritised list of features, opportunities and tasks

Key Questions

  • How long did it take to ship?

  • How are tasks tracked and tickets issued? What tool is used?

  • How is the Epic backlog managed? What tool is used?

  • How often do you review and re-prioritise the Epic backlog?

  • How do you collect user feedback during development? How is that feedback used to improve the service?

  • At each stage of usability testing, which gaps were identified in addressing user needs?

  • What’s our quality assurance testing and rollback plan that supports frequent iterations to the service?

 

 

6. Give ownership and accountability

There must be a single owner who has the authority and accountability for the service.

 

Why it matters:

Giving one person the responsibility to assign tasks and work elements; make business, product, and technical decisions; and be accountable for the success or failure of the overall service means there are no blurred lines, and there is clarity in direction and decision-making.

The owner is ultimately responsible for how well the service meets the needs of its users, which is how a service should be evaluated. The owner is responsible for ensuring that all features are built, as well as managing the backlogs for their service component.

Checklist

  • An owner has been identified

  • All stakeholders agree that the owner has enough knowledge of the service’s intended use case and users to make informed decisions

  • All stakeholders agree that the owner has the authority to assign tasks and make decisions about features and technical implementation details

  • If the output is digital, the owner has enough technical experience to assess alternatives and weigh up tradeoffs

  • The owner has a work plan that includes budget estimates and team resourcing

Key Questions

  • Who is the owner?

  • What has been put in place to ensure the owner has sufficient authority over and support for the project?

  • What does it take for the owner to add or remove a feature from the service?


 

7. Embed privacy and security by design

Identify the data the service will use, store or create. Put appropriate legal, privacy and security measures in place so that users feel confident that their personal information will be kept secure and their privacy respected.

 

Why it matters:

Our digital services have to protect sensitive information and keep systems secure. This is typically a process of continuous review and improvement which should be built into the development and maintenance of the service. At the start of designing a new service or feature, the owner should engage the people responsible for privacy, security and legal to discuss the type of information collected, how it should be secured, how long it is kept, and how it may be used and shared, so that our users feel secure using our service.

Users will most likely not use a service unless they are confident that:

  • Any information they provide is secure and confidential

  • Their information will be used only in the ways described to them

  • They can access their information in the service when they need to

  • Their privacy is protected while they use the service, and afterwards

Checklist

  • Determine, in consultation with the appropriate people, what data is collected and why, how it is used or shared, how it is stored and secured, and how long it is kept

  • Determine, in consultation with the appropriate people, whether and how users are notified about how personal information is collected and used, including whether a privacy policy is needed, where it should appear, and how users will be notified in the event of a security breach

  • Consider whether the user should be able to access, delete, or remove their information from the service
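One lightweight way to satisfy the first checklist item is to keep a machine-readable data inventory for the service, so that "what we collect, why, and for how long" is reviewable on every change. The sketch below is illustrative only; the field names, categories and retention periods are assumptions, not Future policy:

```python
from dataclasses import dataclass, field

@dataclass
class DataInventoryEntry:
    """One record per piece of personal data the service handles (illustrative)."""
    name: str              # e.g. "email_address" (hypothetical field name)
    purpose: str           # why we collect it
    shared_with: list      # internal teams or partners it is shared with
    retention_days: int    # how long it is kept before deletion
    user_deletable: bool   # can the user remove it themselves?

# Example inventory; entries and values are invented for illustration
inventory = [
    DataInventoryEntry("email_address", "account login and notifications",
                       ["support team"], retention_days=730, user_deletable=True),
    DataInventoryEntry("payment_card", "billing",
                       ["payment processor"], retention_days=90, user_deletable=False),
]

# A simple review check: flag anything kept longer than two years
long_retention = [e.name for e in inventory if e.retention_days > 730]
```

A check like `long_retention` can run in CI so that unusually long retention periods trigger a privacy review rather than going unnoticed.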

Key Questions

  • Does the service collect personal information from the user? How is the user notified of this collection?

  • Does it collect more information than necessary? Could the data be used in ways an average user wouldn’t expect?

  • How does a user access, correct, delete, or remove personal information?

  • Will any of the personal information stored in the system be shared with other services, people, or partners?

  • How and how often is the service tested for security vulnerabilities?

  • How can someone from the public report a security issue?

  • Are we compliant with the relevant privacy and data protection regulations?

 

8. Use data to drive decisions

Continuously capture and monitor performance data to analyse the success of the service, and translate the findings into ongoing service improvements.

 

Why it matters:

Every service must aim for continuous improvement. Metrics are an important starting point for discussions about a service’s strengths and weaknesses. At every stage of a project, we should measure how well our service is working for our users. By identifying and capturing the right metrics - with the right tools - we can make sure all our decisions to improve the service are supported by data.

Measuring and collecting performance data means continuously improving a service by:

  • learning its strengths and weaknesses

  • using data to support and implement changes

Checklist

  • We have decided what data we need to capture, where we need to capture it from and how we’ll capture it

  • We have assigned someone in the team to be responsible for identifying actionable data insights

  • We have created a performance framework outlining our objectives and what metrics the team will use to demonstrate we meet them

  • We have used qualitative and quantitative data to help improve our understanding of user needs and identify areas for improvement
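As an illustration of the kind of metric a performance framework might track, the sketch below computes a task completion rate from hypothetical analytics events. The event names and records are assumptions for the example, not a real Future analytics schema:

```python
from collections import Counter

# Hypothetical analytics events: one record per user action
events = [
    {"user": "u1", "event": "task_started"},
    {"user": "u1", "event": "task_completed"},
    {"user": "u2", "event": "task_started"},
    {"user": "u3", "event": "task_started"},
    {"user": "u3", "event": "task_completed"},
]

# Count how many times each event occurred across all users
counts = Counter(e["event"] for e in events)
completion_rate = counts["task_completed"] / counts["task_started"]
print(f"Task completion rate: {completion_rate:.0%}")
```

Tracking a rate like this over time, rather than raw counts, makes it easier to see whether a design change actually helped more users finish the task they set out to do.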

Key Questions

  • What are the key metrics for the service?

  • Do we have historical data to compare against?

  • Which system monitoring tools are in place?

  • Which tools are in place to measure user behavior?

  • What tools or technologies are used for A/B testing?

  • How do you measure customer satisfaction?

  • How will we share our insights and actions internally?


 

9. Support those who need it

Ensure the service is accessible to all users regardless of their ability and environment, making sure processes are in place to support people who need it.

 

Why it matters:

Not everyone will have the same access, comfort and skill level to use digital services by default. Understand how and where users require support, make that support available, and raise awareness of that support.

This is critical to building tools that work for everyone and to avoiding creating inequalities through our services.

Checklist

  • We have identified the types of environments users may access the service in, including different browsers, desktop and mobile devices, and slower connections with limited data; for example, through user stories

  • User research has covered all personas and users of the service, including people from different cultural backgrounds and people with disability

  • Situational and environmental limitations that affect a user’s ability to access the product have been considered

  • We have considered what assistance might be needed to support users; for example, web chat, telephone assistance, face-to-face support, clear instructions and checklists
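Automated checks catch only a subset of accessibility issues, but they are cheap to run on every build. The sketch below is a simplified illustration, not a full audit: it flags `img` tags with no alt attribute using only the Python standard library (the sample HTML is invented):

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Collect <img> tags that have no alt attribute. An empty alt is
    allowed (it marks decorative images), so only a missing attribute
    is flagged."""

    def __init__(self):
        super().__init__()
        self.missing = []

    def handle_starttag(self, tag, attrs):
        attr_dict = dict(attrs)
        if tag == "img" and "alt" not in attr_dict:
            self.missing.append(attr_dict.get("src", "<no src>"))

checker = MissingAltChecker()
checker.feed('<img src="logo.png" alt="Future logo"><img src="chart.png">')
print(checker.missing)  # images that need alt text added
```

A check like this belongs alongside manual testing with assistive technologies and real users, not in place of it.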

Key Questions

  • Which browsers and devices are we supporting, and how are they accommodated?

  • Are there any barriers to the digital service and its content on mobile devices, and do we have plans to address them?

  • Have we understood our target users’ digital skill, confidence and access needs?

  • Have we determined why some users can’t use the digital service independently, for example internet barriers?

  • Have we identified our users’ needs for support?


 

10. Test from end to end

Testing the end-to-end service allows us to find problems and check that the service will work for our users before it goes live, reducing the chance of errors and serving up a bad user experience.

 

Why it matters:

We cannot wait until the service is live to discover problems that stop people from using the service. We need to rigorously and comprehensively test every part of the service during development to be consistently delivering a great experience.

Checklist

  • We are designing and testing our service to work with the devices and browsers our users use

  • We are testing our service in an environment that’s as similar to live as possible

  • We understand the systems we need and the testing environments for non-digital parts of the service

  • We are testing our service frequently

  • We have separated content, design and functionality so updates can be made independently of each other

  • We have a process for testing changes made to the service

  • We have a process for monitoring and testing the service frequently even when changes are not being made

  • We have a plan for handling failures and notifying users

  • There is clear documentation of how the service was built and how to maintain it, and we keep that documentation up to date

  • We have a plan for data storage and recovery in case of data loss
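A minimal end-to-end smoke test walks a critical user journey step by step and blocks release if any step fails. The sketch below is an illustration under assumptions: the paths are invented, and a stub stands in for real HTTP calls so the example is self-contained (in CI, `fetch` would wrap an HTTP client pointed at the staging environment):

```python
def check_journey(fetch, steps):
    """Walk a critical user journey in order; return the steps that failed.
    `fetch` is any callable mapping a path to an HTTP status code."""
    return [path for path in steps if fetch(path) != 200]

# Stub responses standing in for a live-like environment (illustrative)
fake_statuses = {"/": 200, "/apply": 200, "/apply/confirm": 500}

failures = check_journey(fake_statuses.get, ["/", "/apply", "/apply/confirm"])
print(failures)  # any failing steps should block release
```

Keeping the journey definition as plain data makes it easy to add new steps as the service grows, and the same list doubles as documentation of the critical path.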

Key Questions

  • What is our testing process?

  • What test tools are used?

  • How frequently do we test?

  • How do we determine which platforms/browsers/channels to test on?

  • Have we understood and identified the systems we need and the testing environments for non-digital parts of the service?

  • Where is documentation stored?

  • What classifies a pass or fail?