Can a statistical model reliably predict that you will buy the latest Harry Potter book, or add organic brie to your virtual shopping cart this week? What about whether you might become violent in the next 15 years, or whether your unborn child will grow up to be a delinquent?
The growing use of computerized techniques for forecasting what we might buy or do on the basis of how our data matches up to some statistical model would suggest that they are well proven. But a landmark paper recently published in the British Journal of Psychiatry has cast doubt on whether such techniques should be used for making decisions about anything beyond the trivial.
The personalized recommendations and special offers that pop up when you order books or groceries online, and even the specific sequence of questions an insurance call center asks about your claim, are all generated by computerized predictive algorithms derived from analysing patterns, links and associations in large sets of data.
By classifying types of people and their behaviors on this basis, shops try to increase their profits by automatically targeting those of us in their databases that seem most likely to buy certain items. Insurance companies use similar methods to reduce fraud by investigating the claims of those whom the software decides are most likely to be lying.
serious matters
But the British government is adopting such techniques for more serious matters. Software at the Department for Work and Pensions, for instance, is beginning to try to detect fraudsters by analysing the voices of people who ring its call centers -- so if you ask the wrong kind of questions, or perhaps ask the right kind of questions in the wrong way, the software could decide you're not strictly kosher.
The British government's Action Plan on Social Exclusion has risk prediction as its first guiding principle. The idea is to predict life outcomes and trigger early human interventions before things go wrong -- in the case of the Nurse Family Partnership scheme, even before birth. In this scheme, the unborn child of a pregnant mother might be categorized as at high risk of future criminality based on factors such as the mother's age, her poor educational achievements, drug use and her own family background. The mother is then visited regularly at home by a nurse and helped with parenting.
In the criminal justice system too, risk prediction instruments assess the probability of adults and young people re-offending, along with a battery of other actuarial tests for predicting future sexual and violent crime. Such techniques, which are not automated in these cases, also play a central role in evaluations to determine whether a person should be committed indefinitely as a dangerous person with severe personality disorder or whether these people, once committed, are ready for release.
The UK Department of Health has even developed a series of predictive algorithms for scoring those patients with long-term conditions who are at most risk of rehospitalization. The idea is to intervene early to minimize admissions.
The Surveillance Society report from the Information Commissioner's Office outlined worries about predictive social sorting on the grounds that it could amount to discrimination and create new underclasses, and that by totting up negative indicators from health, school and other records, a predictive model could make its own worst predictions come true.
"For instance, if your parents both have criminal records or you have a bad school attendance record because of poor health, even if you are the best-behaved kid in class, you will find that every teacher is likely to treat you with suspicion," said Jonathan Bamford, assistant information commissioner.
error
Now a team of British and US researchers has flagged up a more fundamental danger with these predictive models: when applied to individuals, the margins of error are so high as to render the results meaningless.
The study published in May in the British Journal of Psychiatry by forensic clinical psychologists Stephen Hart and David Cooke and statistician Christine Michie takes as its example two popular actuarial risk assessment tools used to predict violence (STATIC-99 and VRAG).
But the team has studied many other widely used tools such as Risk Matrix-2000 (for predicting sexual offending), and found the same high margins of error across the board.
For groups of people flagged up as high risk by Risk Matrix-2000, for example, the standard estimate is that 36 percent will reoffend sexually in the next 15 years. The team found that the true value of that estimate for a group lies between 28 percent and 45 percent, 95 percent of the time -- ie, the 95 percent confidence interval. For an individual, they found the true value of the estimate was between 3 percent and 91 percent, 95 percent of the time. For STATIC-99, VRAG and other tools, the results were much the same.
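The gulf between the group and individual intervals can be reproduced with standard binomial statistics. The sketch below (an illustration, not the study's own code) computes Wilson score intervals around a 36 percent reoffending rate, once for a notional group of 100 people -- the group size is an assumption for illustration; the study's samples differ -- and once with n = 1, the approach Hart and colleagues used for the individual-level interval:

```python
import math

def wilson_ci(p_hat, n, z=1.96):
    """95 percent Wilson score interval for a binomial proportion."""
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Group-level estimate: 36 percent reoffending rate in a notional group of 100
lo, hi = wilson_ci(0.36, 100)
print(f"group (n=100): {lo:.0%} to {hi:.0%}")

# Individual-level: the same method applied with n = 1
lo, hi = wilson_ci(0.36, 1)
print(f"individual (n=1): {lo:.0%} to {hi:.0%}")
```

With these inputs the group interval spans roughly the high twenties to the mid-forties of a percent, while the individual interval stretches from about 3 percent to about 91 percent -- the same order of uncertainty the study reports.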
"The statistical issue of the difference between precision of estimates in a group against that for an individual is not peculiar to psychology and it is not because psychological variables are less reliable. It's to do with inherent variability in human beings," said Cooke, professor of forensic clinical psychology at the Douglas Inch Centre and Glasgow Caledonian University, Scotland.
Cooke has also looked at medical literature on predicting the probability of heart attacks, cancer and other conditions that rely on physical measurements and found the same large error margins.
No one can argue with statistically based procedures for making complex decisions under conditions of uncertainty, so long as successes and failures are aggregated across cases and the cost of errors is low, said Stephen Hart, a professor in the department of psychology at Simon Fraser University in Canada and a leading authority on assessing the risk of violent offending.
"A life insurance company doesn't care whether it makes a `mistake' estimating the lifespan of a given individual -- it could be wrong about every single person it insures -- but as long as the pattern of life spans is predictable on average, then it can still make good money," Hart said.
But if one is interested in individual cases or if the cost of decision errors is high, then these techniques are problematic.
"Families of victims who are killed by patients and offenders released improperly, and those whose civil rights are infringed when they are held improperly, are not satisfied knowing that despite the errors I made in their cases, I am still right more often than I am wrong," said Hart, who has considerable experience as an expert witness in US courts defending people who have been incarcerated on the basis of actuarial risk predictions.
Hart highlighted a further problem with predictive assessments that compounds the effects of these error margins. In conditions of uncertainty, humans tend to anchor on the first substantial piece of information they receive, and any new information that contradicts this initial idea is given less attention than it merits. This is known as anchoring bias.
If for example a predictive model says that a frail 85-year-old man with heart problems is a high-risk sex-offender and 52 percent likely to reoffend over the next 15 years, anchoring bias means that if the assessor is told the score is wrong, they will simply adjust it to 50 percent or 48 percent.
"It is probably zero. But the 52 percent poisons the assessor's judgment," Hart said.
Predictive models are attractive because they represent an apparently scientific and rigorous yet simple approach to targeting resources and making decisions about complex human problems. But this latest study adds to concerns that such techniques are insufficiently accurate to make important decisions about individuals.