Shielding Neoliberalism: “Social Acceptability” to Avoid Social Accountability of A.I.

Paz Peña

Case study: Childhood Alert System (Sistema Alerta Niñez – SAN) – Chile

The Childhood Alert System (SAN) is software that provides complementary input to the data collected in Chile by the so-called Local Childhood Offices (OLN, in its Spanish acronym, for Oficinas Locales de la Niñez), in order to support decision-making by the managers of these OLNs when offering support to children and their families, considering the individual needs of each child and adolescent (NNA, in its Spanish acronym, for Niños, Niñas y Adolescentes), as well as family and environmental factors.

Several requests for public information submitted to the government have made it clear that SAN is one additional tool – and not the main one – available to these managers for prioritizing cases identified through on-site alerts, that is, those generated and verified by people who interact directly with children and adolescents (NNA) and their families. However, as Derechos Digitales has pointed out in its report on the system, the AI system was designed first, and the institutional framework (the creation of the OLNs) came afterward.

The software is based on the use of Predictive Risk Modeling (PRM). Its purpose is to identify children and adolescents (NNA) who are at risk of rights violations, and the families that require support to strengthen their child-protection role, based on administrative data, by detecting a set of individual, family, environmental and peer conditions that tend to occur when there is a high risk of violation of rights.

The idea is that the tool is run periodically (for example, once a month) over the list of children and adolescents (NNA) in the databases of the Ministry of Social Development and Family (MDSF, in its Spanish acronym, for Ministerio de Desarrollo Social y Familia). The tool scores each individual. The score can then be used to prioritize NNA and families already known to the system, such as those identified by the professionals who interact with NNA and their families in the social protection system, for example the workers of Chile Grows With You (ChCC, in its Spanish acronym, for Chile Crece Contigo). In addition, NNA who obtain a higher risk score and who were not yet in contact with other State programs could be proactively contacted, as in the sketch below. According to the government, prioritization has a preventive purpose, under the responsibility of the Local Childhood Office, which will provide support and prevention programs beneficial to the NNA and their family.
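To make the mechanics concrete, the following is a minimal, hypothetical sketch (in Python) of such a periodic scoring-and-prioritization run. Every name in it – ChildRecord, score_child, the contact threshold of 0.8 – is an illustrative assumption and not taken from the Ministry's actual implementation; it only shows how a monthly run over administrative records could turn risk scores into two queues: one for cases already known to front-line workers, and one for proactive contact.

    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    @dataclass
    class ChildRecord:
        child_id: str
        features: Dict[str, float]   # administrative variables (school, health, family programs, ...)
        known_to_oln: bool           # already identified by front-line workers?

    def score_child(record: ChildRecord, model) -> float:
        """Return a risk score in [0, 1] from a previously trained PRM."""
        return float(model.predict_proba([list(record.features.values())])[0][1])

    def monthly_run(children: List[ChildRecord], model,
                    contact_threshold: float = 0.8) -> Tuple[List[ChildRecord], List[ChildRecord]]:
        """Score every child in the ministry database and build two queues:
        cases already known to the OLN, ordered by score, and high-scoring
        children not yet in contact with State programs."""
        scored = sorted(((score_child(c, model), c) for c in children),
                        key=lambda pair: pair[0], reverse=True)
        prioritized_known = [c for s, c in scored if c.known_to_oln]
        proactive_contact = [c for s, c in scored
                             if not c.known_to_oln and s >= contact_threshold]
        return prioritized_known, proactive_contact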

To identify NNA in these circumstances, it was necessary to train the model through the analysis of NNA who had already been subject to violation of their rights, to then study their life course and identify the family conditions experienced by these NNA before the violations occurred. Children and adolescents facing these conditions at present would be those who may – potentially – need support to mitigate the risk factors surrounding them.
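A minimal sketch of what this training setup could look like, under the assumptions stated above: each row describes a child's family and administrative conditions before a cutoff date, and the label records whether a rights violation was registered after that date. The column names, the choice of classifier, and the evaluation metric are illustrative assumptions, not details of the SAN pilot.

    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    def train_prm(history: pd.DataFrame):
        """history: one row per child, with pre-cutoff administrative
        features and a binary 'violation_after_cutoff' outcome column."""
        X = history.drop(columns=["child_id", "violation_after_cutoff"])
        y = history["violation_after_cutoff"]

        # Hold out a test set so the model's predictions can be checked
        # against outcomes it has not seen, in the spirit of the
        # retrospective New Zealand study described later in this text.
        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.3, stratify=y, random_state=0)

        model = GradientBoostingClassifier().fit(X_train, y_train)
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        return model, auc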

Who developed the SAN pilot?

The pilot and the related consultancy were developed in partnership between two universities: the Centre for Social Data Analytics (CSDA) at the Auckland University of Technology (AUT Ventures Limited) and the Public Innovation Laboratory (GobLab) at the Universidad Adolfo Ibáñez.

The pilot, developed with public funds through a public tender process, was awarded to these two universities after the public announcement made in March 2018 by Chilean President Sebastián Piñera of the Great National Agreement for Children, which included among its measures the creation of an early warning system called “Childhood Alert”. The director of GobLab invited professor Rhema Vaithianathan, co-director of the Centre for Social Data Analytics, to Chile, and in Santiago they held a series of meetings with authorities.

SAN audits

According to government information, “after a few months of operation of the Childhood Alert Pilot System, the need to start an algorithmic audit was identified in order to study and evaluate the hypothetical existence of biases, both in the data used for training the model and the characteristics used for the production model”. The results were expected to inform improvements to the model. This algorithmic audit was financed by the Inter-American Development Bank (IDB) and performed by the Spanish consultancy Eticas Research Consulting. The Chilean government denied requests to disclose the results of this audit.

International controversies over the system

In this section, we refer specifically to the controversies faced by the CSDA and its projects:

  • Allegheny Family Screening Tool (AFST), USA

The Allegheny Family Screening Tool (AFST) is a predictive risk-modeling tool designed to assist with child welfare call-screening decisions in Allegheny County, Pennsylvania, USA. The Centre for Social Data Analytics developed the algorithm.

In her book “Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor” (2018), Dr. Virginia Eubanks refers to the AFST as an example of a system that oversamples households that rely on public assistance programs and thus tends to over-surveil and over-punish poor and working-class families.

A report delivered to Chile by the Centre for Social Data Analytics analyzes the AFST case and refers to the controversy with Eubanks as follows: “One exception to the general trend of positive reports was a book by Virginia Eubanks (2018). She was concerned that AFST stood for poverty profiling, that is, that it would culminate in more children being removed from their families just because they were poor. The County disputed her claims and posted a point-by-point rebuttal on their website.”

Indeed, after this public controversy with the county, Eubanks declared: “My larger point is not that we must wait to entirely eradicate poverty before we try to improve Children, Youth and Family Services, but that the AFST only has access to data collected on families using public services, not on those that access private resources for parenting support. Because this will result in higher risk scores and more scrutiny of poor and working-class families, I believe the system is unjust and discriminatory.”

  • Vulnerable Children PRM, New Zealand

The initiative was undertaken by the New Zealand government between 2011 and 2015 to develop an algorithm that would allow child protection services to predict future child maltreatment at the point of birth and to pre-empt it before it occurs (Ballantyne, 2019). Two models were created, but neither was implemented. Version 1 was intended for use with families who registered in the public welfare system. Version 2 was for use when a child was born. The project was led by Dr Rhema Vaithianathan, then of the University of Auckland and now at CSDA, to build a Predictive Risk Model to be tested retrospectively on children born between 2003 and 2006. The goal was to check the model’s predictions against what had actually happened to those children. It was purely an observational study: no policies changed, and no frontline worker would have accessed the prediction scores.

There is no clear information on why the system’s implementation was halted, but apparently the Minister responsible had ethical reservations, since the observational study might identify children at risk without addressing any of those cases (Ballantyne, 2019).

But the discomfort with the project also related to its substance. Anne Tolley, then the Social Development Minister, told Stuff in 2015: “Where it goes from there is another big ethical question. Because God knows, do we really want people with clipboards knocking on people’s doors and saying: ‘Hello, I’m from the Government, I’m here to help because your children are going to end up in prison?’ I just can’t see that happening.” Tolley’s position was made clear by her note on the briefing paper: “Not on my watch. Children are not lab rats.”

Controversies over SAN within Chile

As soon as the creation of this system was publicly announced, civil society groups working for children’s rights stated that, in addition to surveillance, the system “implied the imposition of a certain form of sociocultural normativity”, also “socially validating forms of stigmatization, discrimination and even criminalization of cultural diversity existing in Chile”. This particularly affected indigenous peoples, migrants, and low-income people, and ignored that growing cultural diversity “demanded greater sensitivity, visibility, and respect, as well as the inclusion of culturally relevant approaches to public policies” (Sociedad Civil de Chile Defensora de los Derechos Humanos del Niño et al., 2019). In this sense, Francisca Valverde, from the group of organizations Bloque por la Infancia and executive director of Asociación Chilena Pro Naciones Unidas (ACHNU), argues that this type of system stigmatizes poor children and does not emphasize the protection of children and adolescents in different territories, including those of higher social classes.

Analysis

In its public addresses, the Chilean government highlights “Childhood Alert” as a targeting mechanism for social programs – a classically neoliberal device – rather than as artificial intelligence software or an AI system.

The reason for that may be strategic, given the controversy that systems similar to SAN have faced both in the United States and in New Zealand. This reading makes sense when government explanatory documents, provided pursuant to transparency legislation, repeatedly emphasize that “Childhood Alert” is just one more source of information for the Local Childhood Offices, and that the final decision is made by humans. In fact, it should be noted that this last remark is one of the recommendations made by CSDA (New Zealand) and GobLab (Chile), which analyzed the AI systems from the USA and New Zealand referred to above (some of which were developed by part of the CSDA team).

However, it seems difficult to prove with evidence – at this point, when the model is still recent – that social workers do not treat the results produced by an AI system designed especially for the occasion as a neutral fact. On the one hand, there may be pressure on them to do so, given the investment made in the system and the fact that the Local Childhood Offices were designed around this technological tool. On the other hand, it has not yet been studied how the system interacts with professionals and whether the alleged neutrality of digital technologies ends up influencing final decision-making.

In any case, the risk score generated by “Childhood Alert” seems to serve well the logic of neoliberal public policies that Chile has implemented, especially with regard to children, from the dictatorship of Augusto Pinochet (1973-1990) onwards. On the one hand, the State’s approach to childhood is permeated by discourses that conceive of linear models of individual development, which consider almost exclusively the individual psychic dimension, without inserting it into governmental processes that produce consequences in other equally relevant dimensions of children’s lives; that is, childhood is seen not as a subject capable of affecting the social fabric but as an object that must be protected from the adult world (Schöngut, 2017). This can be seen, for example, in the use of the word “symptom” (usually related to disease) by the Undersecretary for Children when referring to SAN:

“We are implementing a warning system that is called “Childhood Alert”, which will aim to forecast the first symptoms that, for example, may lead a child to drop out of school, have their first contact with drugs or commit a crime for the first time, and thus start understanding directly what is happening to that child, to their family. All of this will be done through information crossing, which will allow us to determine which children are in the alert phase or at risk”.

Vulnerability is thus associated with the concept of a “social disease” (determined by school dropout, drugs and crime), and there are symptoms of this disease that concern not only the NNA but also their environment. Artificial Intelligence, then, becomes an automated diagnostic tool that allows “ordering the queue” (another metaphor frequently used by the authorities to refer to AI) so as to make the programs offered by the State to prevent social diseases more efficient.

In this sense, instead of offering a system focused on social guarantees, the Chilean State automates targeting decisions, a classic mechanism of neoliberal policies in Chile, especially for children (Cubillos, 2019). Thus, beyond scoring children and adolescents, SAN would seek to reduce spending by increasing the effectiveness of programmatic interventions, offering neoliberal technocratization the backing of automated decisions rendered by the latest technology. Moreover, this instrument has been designed and deployed without an open public conversation in the country. In this sense, it is interesting how the authorities take advantage of the innovative aura of these technologies to present as revolutionary an instrument that, on the contrary, perpetuates a policy that is over 40 years old in Chile. The Undersecretary of Social Evaluation, Alejandra Candia, said:

“This innovative preventive targeting instrument will allow the Government to change the course of life of hundreds of children at risk of vulnerability, allowing us to reach those who require help in time to avoid complex situations that often end up being irreversible for them and also for their families.”

In contrast to the government’s discourse stands that of the developers of the SAN pilot, carried out in the academic environment by two specialized data centers at universities in New Zealand and Chile. This may explain why none of the typical language of technology, such as innovation, modernity, and the future, appears in their conclusions.

Despite this absence, the developers repeatedly treat AI as a desirable technology: although they refer to possibilities for improvement and the need to always check for quality, at no time do they question it as a technology or as a valid tool to support public policies. Rather than responding to criticism of the system, they recommend preparing for such criticism in advance and having answers ready at hand.

In this sense, their reasoning in the conclusions and recommendations focuses on stressing the concept of “service acceptability” by society. On the one hand, this suggests that the developers understand that technologies are part of socio-technical systems (“Even the best-designed services in the world might not be successful if families do not adopt them”, in Vaithianathan, 2019). On the other hand – as we indicated earlier – this may be due to the developers’ experience with criticism over the implementation of other systems they have worked on.

“Service acceptability” is not a strictly technological concept but a communicational one. Basically, it consists of improving communication in order to present the PRM to society. It is not about enhancing the system or deciding not to implement it at all. Rather, it is about taking advantage of humans’ capacity to induce and seduce, through communication, acceptance of non-human systems. In other words, “service acceptability” is a human factor that, in a way, shields the machine from social accountability.

Thus, for example, many of the pilot developers’ recommendations revolve around training humans to improve communication about the PRM. First, they suggest that the presentation of the system to the affected community must be formulated in a “positive” way, based on the deliberate idea of hiding, on the one hand, that what the PRM precisely provides is a classification of social risk:

“MDSF (Ministry of Social Development and Family) should also consider carefully how these families are contacted. Prepare guidelines for front-line Ministry workers that balance the need for transparency with the desire to have families contacted with a positive formulation, rather than a deficiency-based approach that emphasizes risk factors in the life of the family” (Vaithianathan et al, 2019).

And, on the other hand, it is recommended that the Ministry identify areas where the need for State support forces citizens to accept the implementation of the PRM without further questioning:

“It is our recommendation that the MDSF focuses on the pilot with newborns, as this is a time of high need, but also a time when families are open to accepting additional support.” (Vaithianathan et al, 2019).

The role of “service acceptability” as a shield against the social accountability of the non-human system becomes clear when hypothetical problems of “inequality” in the system are addressed, specifically with regard to “race” and socioeconomic status. On this last aspect, instead of seeing the system as a circle of stigmatization of poverty – as Eubanks (2018) has criticized – they consider its potential to discriminate against richer children and adolescents (even though these do not interact with the system). In any case, human shields must bear the costs of software bias: “This indicates that front-line personnel should be more attentive to assessing risk factors among the wealthiest families that have been alerted” (Vaithianathan et al., 2019).

Moreover, it is striking how easily the developers dismiss “race” bias in SAN. They have stated: “In the Chilean context, frankly, we do not see similar concerns regarding race” (Vaithianathan et al., 2019). It is difficult to understand the reason for this statement. On the one hand, as we have already mentioned in this document, public criticism from organizations close to children’s rights specifically points to racial discrimination as an issue. In addition, there is abundant literature on racism and xenophobia in Chile (Tijoux, 2016). Likewise, the developers do not even examine the problem of bias from the point of view of how the system is developed, rather than only from the data it collects. These omissions are cause for concern, especially considering the evidence on how dangerous the racist idea that algorithms do not see skin color can be (Benjamin, 2019).

Another argument defended by the developers is the “social license” for the use of personal data, which would consist of society legitimating the use of such data. In their opinion, the criticism directed against the Chilean personal data protection law (enacted at a time when Internet use was not yet widespread) may predispose SAN to criticism. However, beyond complying with the low standards of Chilean legislation, what would matter for the operation of Artificial Intelligence systems is the “social license”:

“Fortunately, in Chile, the problem of obtaining a social license is somewhat simpler as it is the citizens’ custom to have their data processed to stratify their socioeconomic status for the purpose of targeting social benefits. This practice has had a strong social license due to the benefits it brings to families in need.” (Vaithianathan et al, 2019).

In the context of the data extractivism required by Artificial Intelligence, it is interesting that the developers use the concept of “social license,” which was popularized by the United Nations in 2004 to encourage companies (mainly in the extractive sectors) to involve indigenous peoples in their projects and to obtain their consent before implementing them. Consent has to be free, prior, and informed. It is worth asking whether in SAN there is proper consent and, therefore, “social license,” considering that the use of personal data from “beneficiaries” goes hand in hand with State support, that there is no explicit mention of vulnerability risk rankings when the State seeks consent from families (Valderrama, 2021), and that its developers seem to believe that the “custom” of providing personal data to the State is a good source of legitimacy.

Conclusion

The vulnerable-childhood approach is a classic neoliberal take, enforced in the region by entities like the World Bank, and it stems from the idea of poverty as an individual problem (not a systemic one) and of caseworkers as protectors of people “at risk” (Muñoz Arce, 2018). SAN, in this context, is an excellent opportunity to ask whether these systems let the State reach children and adolescents at social risk earlier and better prepared, or whether in the end they only follow the neoliberal principle of controlling social expenditure by making individual interventions more efficient. Whatever the answer, it is essential to question how this technology automates, and lends the appearance of technological resolution to, ideological decisions based on neoliberal principles (Alston, 2019).

In continuity with this neoliberal logic, SAN responds to the technocratization of public policies. The degree of participation in its design and the transparency of the process are doubtful, which has an impact on democracy itself. In particular, the very people affected by these systems, that is, poor children, adolescents, and their families, are not even consulted, since they are not recognized as interested parties. Likewise, the fact that consent to the use of their data is tied to receiving (or not receiving) State support, and that families are not even informed about this predictive social risk ranking, opens up an unresolved discussion about the ethics of these systems.

The hasty adoption of these neoliberal instruments is also worrisome because it is hard to dismantle them once adopted (Eubanks, 2018). This is even more concerning if we consider the protests against neoliberal policies in several countries of the region in recent years, which have even led to a new constitutional process, as in Chile’s case.

Likewise, there is evidence that the use of AI to predict possible vulnerabilities not only does not work well in the social care of children and adolescents (Clayton et al., 2020) but also ends up being quite costly for states, at least in the first stages, which seems contrary to neoliberal doctrine (Bright et al., 2019).

Although the notion of technologies as sociotechnical systems exists, the conception of AI as an objective process prevails, both in data collection and in data processing. This is concerning in several ways. First, because states and developers pay little attention to bias in social class and race, repeating the racist idea of colorblindness (Noble, 2018; Benjamin, 2019).

Second, the technology’s errors in predicting risk among children and adolescents are expected to be shielded by human intervention, giving the machine a degree of impunity to keep working. However, in the case we have examined there are no field studies that consider how caseworkers who interact with the machine deal with “automation bias”, that is, the tendency to value automated information more highly than our own experience (Bridle, 2018).

We urge feminists to examine SAN and these social-risk models in at least two respects: first, to question whether big data and Artificial Intelligence could ever reflect the structural elements that influence the risk of vulnerability and the social inequalities of our societies, or whether they are merely a way to objectify the responsibility of individuals through their data trajectory held by the State. And second, to examine how SAN and these social-risk models continue the idea of data disembodiment, in which technologies artificially abstract bodies, identities, and interactions from their social contexts to obscure their operation as tools of social control, aggravating their consequences for social inequalities (Monahan, 2009).

Oppressive A.I. Framework by Joana Varon and Paz Peña. Design by Clarote for notmy.ai

Transparency requests submitted to Chilean government, answers and official documents about the project:


Bibliography

Alston, P. (2019). Report of the Special Rapporteur on extreme poverty and human rights. Promotion and protection of human rights: Human rights questions, including alternative approaches for improving the effective enjoyment of human rights and fundamental freedoms. A/74/48037. Seventy-fourth session. Item 72(b) of the provisional agenda.

Ballantyne, N. (2019). The ethics and politics of human service technology: The case of predictive risk modeling in New Zealand’s child protection system. The Hong Kong Journal of Social Work, 53(1-2), 15-27. DOI: 10.1142/S0219246219000044

Bridle, J. (2018). New Dark Age: Technology, Knowledge and the End of the Future. Verso Books.

Bright, J., Bharath, G., Seidelin, C., & Vogl, T. M. (2019). Data Science for Local Government. Available at SSRN: https://ssrn.com/abstract=3370217 or http://dx.doi.org/10.2139/ssrn.3370217

Clayton, V., Sanders, M., Schoenwald, E., Surkis, L. & Gibbons, D. (2020). Machine Learning in Children’s Services: Summary Report. What Works for Children’s Social Care, UK.

Cubillos Celis, P. (2019). Neoliberalismo, focalización e infancia en Chile: 1973-2010. Revista Mexicana de Sociología, 81(3). DOI: http://dx.doi.org/10.22201/iis.01882503p.2019.3.57923

Monahan, T. (2009). Dreams of Control at a Distance: Gender, Surveillance, and Social Control. Cultural Studies. Critical Methodologies, 9(2), 286–305. https://doi.org/10.1177/1532708608321481

Muñoz Arce, G. (2018). The neoliberal turn in Chilean social work: Frontline struggles against individualism and fragmentation. European Journal of Social Work. DOI: 10.1080/13691457.2018.1529657

Schöngut, N. (2017). Ensamblajes socio-técnicos para la producción de intervenciones psicosociales en un programa del Servicio Nacional de Menores de Chile. Psicoperspectivas. Individuo y Sociedad, Vol. 16, No. 3.

Sociedad Civil de Chile Defensora de los Derechos Humanos del Niño et al. (2019, January 28). Día Internacional de la protección de datos. Carta abierta de la Sociedad Civil de Chile Defensora de los Derechos Humanos del Niño. ONG Emprender con Alas.

Tijoux, M. E. (2016). Racismo en Chile. La piel como marca de la inmigración. Editorial Universitaria. Santiago, Chile.

Vaithianathan, R., Benavides, D., Taskova, K., Hermosilla, M. P., Letelier, A., Escobar, J., Garretón, M. & Kulick, E. (2019). Informe Final: Propuesta de instrumento de focalización para la identificación de niños, niñas y adolescentes en riesgo de vulneración de derechos. “Construcción del Instrumento de Focalización Alerta de Niñez”. 730566-13-LQ18. June 27, 2019. Universidad Adolfo Ibáñez (UAI), Chile, and Auckland University of Technology (AUT), New Zealand.

Valderrama, M. (2021). Chile: Sistema Alerta Niñez y la predicción del riesgo de vulneración de derechos de la infancia. Derechos Digitales.