Digital future of local services

I was on the front page of the Guardian once. I’d written a blog on AI for MHCLG; the Guardian quoted the blog and, where it would normally write that ‘A spokesperson said…’, in this case it included my name as a civil servant, which led to quite a squeaky moment when I got into the office on Marsham Street that morning. Now, eight years on, the controversy has waned, the public are used to these concepts, and we’re using learning algorithms and predictive analytics in earnest to improve residents’ lives.

In this blog, I want to talk about the practical building blocks, and what happens when we design services and digital in tandem. That’s actually part of the problem: sometimes we design a service and then add on a digital solution; other times we come up with brilliant digital ideas with no practical application. Focusing on both frontline service delivery and digital tech, at the same time, is where the magic lies.

Benefits

So what can digital do for outcomes and efficiency in local areas? Ultimately it’s about perfect knowledge: if we had perfect knowledge of a family’s or older resident’s needs, and of local resources and services, what would we do differently? How would we provide the right support, at the right time, in the right place? How would we change pathways and access? How would we integrate and connect differently?

‘Perfect knowledge’ is probably going a bit far. But we are now seeing that if we have better knowledge of needs, resources and services, we can shift demand earlier and to more cost-effective places, reducing the cost of delivery. We can also improve the experience of services (think pro-active, positive, compassionate) and so repair relationships between residents and the public sector, potentially rebuilding lost political capital at local and national levels.

The data

Improving and linking data is the practical starting point. Build this first, not because it’s technically difficult, but because getting the right data together requires a significant local cultural shift across multiple sovereign organisations, each with its own interpretation of the law and of what’s in its own and its users’ best interests. That takes time and a coordinated effort to shift.

Legislation is improving. For example, the Children’s Wellbeing and Schools Bill will require organisations to share data, which goes beyond the previous legal mechanism of enabling sharing. The Care Act, the Children Act and the Digital Economy Act (which oddly excludes health) all help to create a matrix of powers to reference in information sharing agreements. Of course, the law is never black and white, and each data governance lead will want to revisit agreements, so keep working on the local shared culture. Increased sharing should be accompanied by increased transparency and clarity for residents about how their data is used to improve outcomes. We should also consider ethics boards and engagement with residents to test approaches.

The tech has changed. Over the last decade we’ve moved from one big database to the concept of data lakes, i.e. linked data sets in different formats which minimise the requirement to restructure data. (In the next decade, there’s likely to be a similar shift to incorporating unstructured data from emails, case notes, user feedback — but we’re getting ahead of ourselves.)
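As a rough illustration of what linked data sets in different formats can look like in practice, the sketch below joins a CSV extract and a JSON extract on a shared identifier, rather than restructuring either source into one master database. The file names, column names and matching key are all hypothetical.

```python
import pandas as pd

# Two hypothetical extracts sitting in the data lake in their native formats:
# a CSV from the housing system and a JSON feed from early help case work.
housing = pd.read_csv("housing_register.csv")        # resident_id, tenure, arrears_flag
early_help = pd.read_json("early_help_cases.json")   # resident_id, open_case, last_contact

# Link the two sets on the shared identifier, keeping each source in its own shape.
linked = housing.merge(early_help, on="resident_id", how="left")
print(linked.head())
```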

For data sets held in partner organisations, an overnight feed of changes works well, with one partner safely receiving the full data, matching records and retaining only what’s relevant for service delivery, in line with information sharing agreements. Note that the data needs to be identifiable: we can put in place cost-effective interventions for individuals and families, but not so much for wards or neighbourhoods.
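A minimal sketch of that overnight pattern, assuming a hypothetical delta file from a partner and a local resident index used for matching; the field names and the retention list are illustrative and would be set by the information sharing agreement.

```python
import pandas as pd

# Hypothetical overnight delta file from a partner, plus the receiving
# organisation's resident index used for matching.
changes = pd.read_csv("partner_changes_overnight.csv")  # nhs_number, attendance_flag, clinical_notes
residents = pd.read_csv("local_resident_index.csv")     # nhs_number, resident_id

# Match incoming records to known residents on an identifiable key.
matched = changes.merge(residents, on="nhs_number", how="inner")

# Retain only the fields the information sharing agreement allows the service to keep.
retained = matched[["resident_id", "attendance_flag"]]
retained.to_parquet("data_lake/partner_feed_latest.parquet", index=False)
```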

Ethics

The ethics of how we use residents’ data is changing all the time as public opinion shifts, industry norms develop, legislation changes, and new technology is deployed. Public engagement and ethics boards are important tools to test how public services plan to use data, and to govern whether the anticipated impact is justified.

Algorithms can be biased because the data we collect is skewed towards particular needs. There are methods to test and reduce bias, such as changing gender or ethnicity and running the model again, or excluding unbalanced data.
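As a simple sketch of the first of those methods (not a full fairness audit), the hypothetical helper below flips a protected attribute, re-scores the model and reports how many risk flags change; a large share of flipped flags suggests the attribute, or a proxy for it, is driving the predictions.

```python
import pandas as pd

def counterfactual_swap_test(model, features: pd.DataFrame, attribute: str) -> float:
    """Swap a protected attribute, re-run the model and return the share of
    people whose risk flag changes. Assumes the attribute is coded 0/1; in
    practice each category would be permuted in turn."""
    swapped = features.copy()
    swapped[attribute] = 1 - swapped[attribute]

    original_flags = model.predict(features)
    swapped_flags = model.predict(swapped)
    return float((original_flags != swapped_flags).mean())

# e.g. flip_rate = counterfactual_swap_test(trained_model, test_features, "gender")
```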

An important safeguard is how we use the data to inform action. Decision making in services is varied: it might be considered reasonable for AI to be used to offer help or information, but not to make a safeguarding decision. It’s critical to have a professional in the loop for all significant decisions.

Also, to ensure professionals are comfortable with the information they receive, there should be options for that professional to feed back and influence how predictive data is presented.

Possibly linked to a deficit bias, most local data describes needs and problems; we’re less good at describing the protective factors which reduce the impact of needs. This is where unstructured data might come to the rescue, such as case notes from three conversations practice describing the community around an older person, or a note that a child has a particularly strong relationship with her form teacher.

In time, the data lake should cover all relevant partner information. For example, housing, social care, police activity such as domestic abuse, community safety, public health nursing, debt recovery, revenue and benefits, probation, mental health, physical health, GP engagement, education records, nurseries and voluntary sector data, local community resources and hubs, and information supplied by residents.

Predicting needs

In the last decade we’ve become much better at understanding and predicting local needs. Huge data sets mean there is no chance that we can manually work out the relationships between data points across a whole population; it’s simply too complicated. That’s where learning algorithms, or artificial intelligence, start to be useful.

There is still a degree of manual input: to correctly arrange data extracted from the data lake, to test what’s having the biggest impact on the intended predictions, and to identify which type of AI algorithm might work best. But we are now at the point where the outputs are useful, can be practically deployed to direct local services and resources, and can improve both outcomes and efficiency.
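For a flavour of those two steps, the sketch below uses synthetic stand-in data to compare two common algorithm types and then asks which features are having the biggest impact on predictions; the feature names, data and model choices are illustrative, not those used in any live service.

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for features arranged from the data lake (roughly 10% positives).
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.9], random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(X.shape[1])])

# Which type of algorithm might work best for this outcome?
candidates = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "gradient boosting": GradientBoostingClassifier(),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC {scores.mean():.3f}")

# Which data points are having the biggest impact on the predictions?
best = GradientBoostingClassifier().fit(X, y)
importance = permutation_importance(best, X, y, n_repeats=10, random_state=0)
for feature, score in zip(X.columns, importance.importances_mean):
    print(f"{feature}: {score:.3f}")
```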

Children not in education, employment or training

One example is predicting which pupils in school are at risk of becoming NEET in the future. If we know which children are at risk, then more education support can be given from an early age in schools and with the family. Nationally there is already an algorithm used to identify pupils at risk, called RONI (the risk of NEET indicator), so ethically it’s accepted that we use an algorithm to offer early help in this space.

Tests on previous years of data showed that it was possible to improve on the RONI algorithm using machine learning trained on education and safeguarding data. The information about which pupils are at risk is displayed in an app for schools, identifying children from 12 years old who might benefit from additional support. The app also includes best practice guidance about the right support.
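A minimal sketch of that kind of back-testing, assuming a hypothetical pupil extract with a cohort year, a simple RONI-style score and an outcome flag; the column names, model choice and years are illustrative only.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

# Hypothetical historical extract: one row per pupil, with a cohort year,
# a pre-calculated RONI-style score and whether the pupil later became NEET.
pupils = pd.read_csv("pupil_history.csv")

# Back-test on previous years: train on earlier cohorts, evaluate on the latest one.
train = pupils[pupils["cohort_year"] < 2022]
test = pupils[pupils["cohort_year"] == 2022]

features = ["attendance", "exclusions", "safeguarding_contacts"]
model = RandomForestClassifier(random_state=0).fit(train[features], train["became_neet"])

print("RONI-style score AUC:", roc_auc_score(test["became_neet"], test["roni_score"]))
print("ML model AUC:        ", roc_auc_score(test["became_neet"],
                                             model.predict_proba(test[features])[:, 1]))
```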

Potentially thousands of pupils are identified by the predictive analytics. And there is a clear mechanism to deploy teaching staff to provide a little additional help for these pupils, to reduce their risk of being NEET. Whilst the model is still being tested, it gives an idea of how we might practically develop digital tech at the same time as service delivery.

So once we have the information arranged, how does predictive analytics work? Effectively we feed in perhaps 30 promising datasets and the AI identifies which residents are at risk of developing a specific need. When we test this against previous years of real data, we can assess the efficacy of the AI model in four boxes:

  • True positives — these are the people correctly identified by the algorithm as developing the specific need.
  • True negatives — again, people correctly identified as not developing that need during the time-frame.
  • False positives — sometimes the algorithm will get it wrong and incorrectly identify people as needing help when they don’t.
  • False negatives — people identified as not developing a specific need when in reality they do.

What we want is high true positives and true negatives, and low false positives and false negatives. Different AI techniques and algorithms will give different responses, and a significant amount of time is spent tweaking and testing to get the optimum efficacy. What we’ve found is that AI works best when populations are balanced and the number of positives is similar to the number of negatives. However, the reality of predicting falls, children in care, NEETs, educational needs and so on is that we are often looking for the proverbial needle in the haystack. Current learning algorithms are less well suited to this, so we often end up identifying a lot of false positives, and that’s where service design comes in.
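To make the four boxes and the needle-in-the-haystack problem concrete, here is a small worked sketch with invented numbers: even a model with good headline accuracy produces a lot of false positives when only a small fraction of the population goes on to develop the need.

```python
# Worked example with invented numbers: 10,000 residents, 2% of whom (200 people)
# actually develop the need within the time-frame.
population = 10_000
true_cases = 200

# Suppose the model correctly flags 80% of the true cases (sensitivity)
# and correctly clears 95% of everyone else (specificity).
true_positives = int(true_cases * 0.80)                       # 160
false_negatives = true_cases - true_positives                 # 40
false_positives = int((population - true_cases) * 0.05)       # 490
true_negatives = (population - true_cases) - false_positives  # 9,310

precision = true_positives / (true_positives + false_positives)
print(f"Flagged residents: {true_positives + false_positives}")
print(f"Precision: {precision:.0%}")  # roughly 25%: three in four flags are false alarms
```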

You’ll recall the importance of designing both the digital tech and the service in tandem. If, for example, we can identify a small number of true positives with very few false positives, we might deploy a more expensive service intervention. For instance, where children are at risk of coming into care, we might provide a list of families to social care or to an edge of care service for intensive support, which nonetheless costs a lot less than the average £100k+ each year for a child who does come into care.

Where we have a lot of false positives but also a lot of true positives identifying children at risk of coming into care, we might ask teaching or community sector workers to look out for needs or do a bit more with families (e.g. the NEETs model above).

And where we have a list of perhaps 10,000 families that are at risk of crisis, we can deploy an automated early help offer, to link those families up to local community resources, digital and other support. Still cost-effective but utilised at a much greater scale.
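One way to express that tiering in service design terms, as a sketch with invented thresholds: the intensity (and cost) of the response is matched to the confidence of the prediction and the size of the resulting list.

```python
def choose_response(risk_score: float, model_precision: float) -> str:
    """Illustrative mapping from a predicted risk score and the model's measured
    precision to a tier of service response; all thresholds are invented."""
    if risk_score >= 0.8 and model_precision >= 0.7:
        # Small, high-confidence list: a more expensive, intensive intervention.
        return "edge of care / intensive family support"
    if risk_score >= 0.5:
        # Bigger list with more false positives: lighter-touch professional awareness.
        return "ask teaching or community sector workers to look out for needs"
    if risk_score >= 0.2:
        # Very large list: automated early help offer at scale.
        return "automated signposting to community resources and digital support"
    return "no additional action"
```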

So in the end, the service model will change, based on the efficacy of our predictive analytics models. And due to the time it can take to redesign services, we really need to start now, in anticipation of predictive analytics being used to deploy services and other resources.

The service

I’ve mentioned already the necessity of designing services in tandem with designing the data technology. The big shift will be how we use more perfect knowledge to deploy resources. Take two families with the same needs; let’s call them Family Zeta and Family Alpha (Generations Z and Alpha). A current Gen Z family might have a range of needs such as a child with SEND, a history of domestic abuse and safeguarding concerns, an adult with mental health needs, non-payment of a debt, or risk of homelessness. Whichever need is identified first by a local authority or its partners often dictates what response the family receives, with totally different services being offered. Other needs are left unaddressed until they reach an expensive threshold and escalate into another isolated intervention. Family Zeta bounces around the system, costs a significant amount of money, and often has a poor experience and poor long-term outcomes.

How might this change in the future if we had more perfect knowledge of needs, protective factors, resources and services?

Family Alpha has the same needs but in our future local public sector system. When the family comes into contact with a service, our data is integrated, and either the professional or more likely an AI bot is able to identify additional needs and low-cost early help interventions. That might include bringing in housing to prevent homelessness, linking the family to a local community group to support mental health, offering help to manage debts with Citizens Advice, ensuring additional support in the nursery for educational needs, etc. Whatever route Family Alpha takes, we hold the same data and they are met with the same assessment and pathway to early help that reduces the likelihood of their needs escalating.

A few more observations. Professionals of the future will be deployed differently, with a mix of cases due to residents meeting thresholds of acute need, and other cases assigned because algorithms predict that an individual’s needs will escalate in the future. The shift to this mix will be slowed by available capacity, but is an inevitable change if we want to move to a pro-active demand management model.

We will develop better data sharing for professionals and partner organisations. Case management systems will be shared where it makes sense to have the same workflow. And we will create apps that safely share residents’ needs and the services they are receiving, accessed by partners including the voluntary sector, schools and others. These sharing mechanisms remove the barriers that have typically divided our workforce, further the concept of perfect knowledge, and ensure Family Alpha can access the best pathway of early help.

Finally, the way residents receive information and navigate the public sector system will change. There might be 200 advice and guidance websites in a local area, on top of the thousands available nationally and internationally. We are already seeing new AI bots that can take that information and tailor it to the individual resident — so each person is presented with their own webpage designed around their own needs — helping them to navigate the system and get help early.
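Under the hood, a simple version of that tailoring is retrieval: find the guidance most relevant to what a resident tells us and surface it on their personalised page. Here is a minimal sketch using basic text similarity over a few hypothetical local advice snippets, a stand-in for the richer AI bots described above rather than a description of any particular product.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical snippets drawn from local advice and guidance pages.
pages = [
    "How to apply for a Discretionary Housing Payment if you are struggling with rent.",
    "Free debt advice sessions at the community hub every Tuesday.",
    "Support groups for parents of children with special educational needs.",
]

vectoriser = TfidfVectorizer()
page_vectors = vectoriser.fit_transform(pages)

# A resident's own description of their situation.
query = "I'm behind on my rent and worried about losing my home"
scores = cosine_similarity(vectoriser.transform([query]), page_vectors)[0]

# Present the most relevant guidance first on the resident's personalised page.
for score, page in sorted(zip(scores, pages), reverse=True):
    print(f"{score:.2f}  {page}")
```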

Productivity

There are also smaller productivity and process gains from better data and AI that can’t be overlooked. Social care is already using tech to automate writing case notes based on conversations with residents, including automating subsequent actions such as letters. Draft plans for pupils are pulled together automatically from education, health and care assessments and data. Adult services are prioritising assessments based on an automated analysis of need. Advice lines are using large language models trained on the local offer and statutory guidance to improve the advice they give. Chatbots are providing residents with advice, guidance and even referrals. Automated redactions, translations, letter writing and dashboard creation are all freeing up time for frontline face-to-face delivery.

The system

It will take time to shift to this new demand management model for a whole local area. There is a role for system leaders to set the vision (although don’t get bogged down by this) and to establish the foundations that have a longer lead time, such as data engineer and data scientist capacity, a culture of data sharing, data lake infrastructure, and early help service capacity.

To deploy an automated early help system we will also want a trusting digital relationship with all our residents, particularly those in most need. Family Alpha will think it’s normal for the Council to contact them about a local group or online resources, with advice and guidance, or waiting well support. They will engage with digital services, contribute to assessments or updates. It won’t be all roses, but Family Alpha will often feel like people are compassionately looking out for them and trust the support they receive. The technology underpinning this digital relationship includes client relationship management systems, automated messaging and links to early help predictive analytics which updates based on changing needs and protective factors.

And so, haven’t things moved on, from a blog on AI eight years ago that might have been a P45 moment, to tangible and practical examples of designing digital and services in tandem. And what of the impact?

The impact is more perfect knowledge so residents get the right help, at the right time, in the right place. The impact is compassionate early help for residents and families which reduces acute need, builds relationships and political capital. The impact is better outcomes for residents and better efficiency for local public services.

References

MHCLG, Supporting Families Programme: Predictive Analytics, Richard Selwyn, https://supportingfamilies.blog.gov.uk/2018/05/14/predictive-analytics/

The Guardian, Councils use 377,000 people’s data in efforts to predict child abuse, https://www.theguardian.com/society/2018/sep/16/councils-use-377000-peoples-data-in-efforts-to-predict-child-abuse