The most senior civil servant in the Department for Work and Pensions (DWP) has told MPs that he hopes DWP’s use of artificial intelligence in detecting fraud among benefit claimants will not lead to a repeat of the Post Office Horizon scandal.
Peter Schofield was questioned* by the Commons work and pensions committee yesterday (Wednesday) about his department’s decision to spend £70 million on so-called machine learning** in the three years to 2024-25.
DWP’s annual report and accounts revealed last year that it was using machine learning to prioritise which universal credit claims to review for potential fraud.
But the National Audit Office reported in that document that using machine learning in this way creates “an inherent risk that the algorithms are biased towards selecting claims for review from certain vulnerable people or groups with protected characteristics”.
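Neither DWP nor the NAO has published how the model works, but the general technique being described, training a model on historical cases and ranking new claims by predicted fraud risk, can be sketched in rough outline. Everything in the following example, from the features to the data, is invented for illustration and is not DWP’s actual system:

```python
# Purely illustrative sketch of risk-based claim prioritisation;
# this is NOT DWP's system, and every feature, figure and threshold
# here is invented for the example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical historical data: each row is a past claim review,
# each column an invented claim attribute; y marks confirmed fraud.
X_train = rng.normal(size=(1000, 4))
y_train = (X_train[:, 0] + 0.5 * X_train[:, 1]
           + rng.normal(size=1000) > 1.5).astype(int)

model = LogisticRegression().fit(X_train, y_train)

# Score incoming claims and put the highest predicted risk first,
# so human investigators look at those cases before the rest.
X_new = rng.normal(size=(50, 4))
risk = model.predict_proba(X_new)[:, 1]
review_order = np.argsort(risk)[::-1]
print("First five claims to review:", review_order[:5])
```

The risk the NAO describes arises because any patterns in the historical data, including patterns linked to protected characteristics, are carried straight through into the rankings.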
The disabled people’s organisation Greater Manchester Coalition of Disabled People is continuing to work with the tech justice campaign group Foxglove over concerns that the algorithm could be “over-picking” disabled people for its benefit fraud investigations.
Schofield told the work and pensions committee yesterday that machine learning “helps us to target our resources, helps us to target our people, to investigate more effectively those sort of cases where there’s most likely to be fraud”.
He said there was always a human being who made the final decision on whether to launch a fraud investigation, and that the system also “enables us to identify those people who might be vulnerable customers”.
But asked by Conservative MP Sir Desmond Swayne whether there were “shades of Horizon” over its use of such technology, Schofield said: “I really hope not.”
His response came in the week that an ITV drama about the Horizon scandal, in which more than 700 sub-postmasters were wrongly convicted of offences including fraud and theft because of faulty software supplied by the Japanese multinational Fujitsu, pressured the government into rushing out plans to exonerate them.
The SNP’s Peter Grant, a member of the public accounts committee, pointed out that if the algorithm used by DWP to detect possible cases of fraud has any “unintended inclination towards bias” around particular groups of claimants, that would put DWP in breach of the law.
Neil Couling, the DWP director-general responsible for universal credit, told Grant: “The systems do have biases in, it’s whether they are biases that are not allowed in the law.”
He said the machine learning systems need to have “bias” so the department can “catch fraudsters”, but he said that DWP checks for “unintended bias” at three separate stages.
He said DWP would report in this year’s annual report and accounts whether there were “particular groups with different protected characteristics impacted unintentionally by this kind of activity”.
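Couling did not explain what those checks involve, but one common way of testing for this kind of unintended bias is to compare how often claims from different groups are selected for review. The sketch below illustrates the idea; all data and group labels are invented, and this is not DWP’s methodology:

```python
# Illustrative bias check: compare selection rates across groups with
# a hypothetical protected characteristic. Not DWP's methodology.
import numpy as np

rng = np.random.default_rng(1)
selected = rng.random(10_000) < 0.05          # was the claim flagged for review?
group = rng.choice(["A", "B"], size=10_000)   # invented group labels

rates = {g: selected[group == g].mean() for g in ("A", "B")}
ratio = min(rates.values()) / max(rates.values())
print(f"Selection rates: {rates}; ratio: {ratio:.2f}")
# A ratio well below 1.0 would suggest one group is being
# "over-picked" for review and would prompt further scrutiny.
```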
But he admitted that other countries had “got themselves into quite a bit of a pickle” when they have “tried to use this sort of technique”.
He said: “I’m determined that in the UK, we don’t do that, so as I said to the public accounts committee back in September, we are taking this very carefully.”
*Watch from 10.43am onwards
**A type of artificial intelligence in which computers analyse large quantities of historical data and learn to identify patterns in that data, rather than following explicitly programmed rules