Monday, July 29, 2024

Poor South African households can’t afford nutritious food – what can be done


 
 

Food insecurity is a feature of life for millions of South Africans. Food insecurity refers to a lack of regular access to enough safe and nutritious food for normal growth and development and an active and healthy life. This may be due to the unavailability of food or a lack of resources to buy it.

The extent of this was recently mapped by the Human Sciences Research Council. For example, in Gauteng province, South Africa’s economic powerhouse, 51% of households experience food insecurity. A national survey between 2021 and 2023 found Gauteng households were affected to different degrees: 14% faced severe food insecurity, 20% moderate food insecurity, and 17% mild food insecurity.

The research also found that South African households survive on nutrient-poor food groups such as cereals, condiments, sugars, oils and fats. Consumption of nutrient-rich food groups such as fruits, pulses, nuts, eggs, fish and seafood is limited.

Dietary diversity is useful for measuring food security. A diverse, nutritious and balanced diet prevents nutritional deficiencies and diseases. A fall in dietary diversity is linked to a rise in the proportion of people who are malnourished.

The HSRC findings were the most recent to point to a growing crisis of food insecurity in the country. In an earlier study, we examined the diets of people living in South Africa’s second largest city, Tshwane.

 

We found that, due to income and other socio-economic factors, none of the poor households in our study were getting adequate nutrients from what they were eating. Mostly they were eating cereals (grains such as wheat and maize), vegetables such as legumes, roots and tubers, and oils and fats, because they couldn’t afford anything else. Most had little to no income, and most were poorly nourished.

On the basis of these findings, we recommended a range of interventions. These included better implementation of existing policies aimed at opening up opportunities, such as the Expanded Public Works Programme, and campaigns to raise people’s awareness of nutritious foods and of how to grow them.

We also recommended enlisting the help of the private sector and NGOs.

Mapping eating habits

The study measured what households were choosing to eat from among the various food groups. The sample comprised 775 households from food-insecure areas of Tshwane, as mapped by the 2016 Statistics South Africa Community Survey. We asked which food groups had been eaten in the previous seven days by any household member, including food prepared at home but eaten elsewhere, such as a packed lunch at work.

Twelve food choices were used: cereals, legumes, roots and tubers, vegetables, fruits, milk, eggs, meat, fats, fish, sweets and beverages.

Principal component analysis was used to analyse the Tshwane study households’ consumption of these food groups. The technique is widely used to derive dietary patterns from the day-to-day eating patterns of households.
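To make the method concrete, the sketch below shows how dietary patterns can be derived with principal component analysis from a household-by-food-group consumption matrix. It is a minimal illustration in Python, not the study’s actual pipeline: the data are randomly generated and the choice of four components is assumed purely for illustration; only the twelve food-group labels echo the list above.

    import numpy as np
    import pandas as pd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical data: number of days (0-7) each of 775 households ate
    # each food group in the survey week. Invented, not the study's data.
    rng = np.random.default_rng(0)
    food_groups = ["cereals", "legumes", "roots_tubers", "vegetables",
                   "fruits", "milk", "eggs", "meat", "fats", "fish",
                   "sweets", "beverages"]
    households = pd.DataFrame(rng.integers(0, 8, size=(775, len(food_groups))),
                              columns=food_groups)

    # Standardise, then extract components; each component is read as a
    # "dietary pattern" shared across households.
    X = StandardScaler().fit_transform(households)
    pca = PCA(n_components=4)
    scores = pca.fit_transform(X)  # each household's score on each pattern

    # Loadings show which food groups define each pattern.
    loadings = pd.DataFrame(pca.components_.T, index=food_groups,
                            columns=[f"pattern_{i + 1}" for i in range(4)])
    print(loadings.round(2))
    print("explained variance:", pca.explained_variance_ratio_.round(2))

A component’s loadings show which food groups tend to be eaten together, and a household’s score on a component shows how strongly its diet follows that pattern.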

We then used the Simpson index to measure how diverse, and therefore how nutrient rich, the diets were. An index score greater than 0.5 indicates a highly diversified diet. The Simpson index was then profiled against socio-economic determinants of food insecurity, such as age, household size, income and food expenditure.
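The Simpson index itself is a simple calculation: one minus the sum of squared consumption shares across the food groups. The sketch below uses invented household figures, not study data, to show how a cereal-heavy week scores against a more mixed one.

    def simpson_index(shares):
        """Simpson dietary diversity: 1 minus the sum of squared shares.
        A score of 0 means only one food group is eaten; scores above 0.5
        are read here as a highly diversified diet."""
        total = sum(shares)
        if total == 0:
            return 0.0
        proportions = [s / total for s in shares]
        return 1 - sum(p * p for p in proportions)

    # Illustrative weekly counts for the 12 food groups (numbers invented).
    cereal_heavy = [7, 1, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0]
    mixed_diet = [3, 2, 2, 3, 2, 2, 1, 2, 1, 1, 1, 2]

    print(round(simpson_index(cereal_heavy), 2))  # below the 0.5 threshold
    print(round(simpson_index(mixed_diet), 2))    # well above 0.5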

The average dietary diversity score was low for all the poor households in the Tshwane study.

What we found

In our survey, the households with the least diversified diets were those headed by women, people with no more than secondary education, unemployed people, and/or recipients of the social support grant. This suggests that the grants are insufficient to cover people’s food needs.

Households with low dietary diversity rarely reared animals or had food gardens.

Household food choices fell into four main groups. The first was associated with a vegetable-based diet (roots and tubers, legumes, vegetables and fruit). The second was associated with people who consumed sugar, honey and miscellaneous products (coffee, tea, soft drinks and instant foods). The third comprised people who consumed fats and proteins, eggs and milk products. The final group was associated with the consumption of cereals or staple foods.

It was not possible to identify a group of urban food-insecure households that ate a mixed selection of all the food groups. This suggests that none of their diets met adequate dietary requirements for good nutrition, regardless of socio-economic status.

We also found that households faced a number of obstacles beyond income constraints that limited their dietary diversity. These included unemployment, household size, education, and the lack of land, skills and resources to practise urban agriculture.

Next steps

The City of Tshwane in Gauteng has adopted numerous strategies to alleviate food insecurity. These include the 2017 climate response strategy, meant to reduce food insecurity due to climate vulnerability, and the Expanded Public Works Programme, which provides job creation and skills development, supporting sustainable socio-economic development and poverty reduction.

But execution has been suboptimal.

Poor coordination among government departments and agencies about priorities has led to interventions being ineffective.

A parliamentary monitoring group presented a list of the challenges facing the public works programme. These included delays in implementation and reporting, as well as non-submission of quarterly evaluation reports by some public bodies.

As a result, funding from the central government to local and provincial administrations was withheld for the 2024/25 budget.

In addition, the City of Tshwane faces financial challenges that affect its ability to get things done. This means the city cannot solve the problem of food insecurity and nutrition alone.

What’s needed is collaboration between government, the private sector and civil society. First, policies should prioritise food security and nutrition, for example through subsidies for nutritious foods, regulations to improve food safety and incentives for sustainable agricultural practices.

Second, public-private partnerships must be established to implement food security programmes targeting marginalised households, urban smart agriculture and community gardens.

Third, there is a need to fund research on innovative food production technologies and sustainable agriculture practices, and share industry data on food supply chains and consumer preferences.

Fourth, there is a need to conduct community-based research on food needs and barriers to access.

Fifth, financial resources should be contributed by investing in startups and enterprises focused on improving food security and nutrition outcomes.

Finally, monitoring and evaluation frameworks need to be established to assess the impact of food security policies and programmes and ensure accountability and transparency in resource allocation. The Conversation

Adrino Mazenda, Senior Researcher and Associate Professor, Economic Management Sciences, University of Pretoria

This article is republished from The Conversation under a Creative Commons license. Read the original article.

South Africa is trying to put a stop to the abuse of its intelligence agencies - what still needs fixing

 A man points at a CCTV screen while talking on a cellphone.


South Africa’s security laws are open to abuse by rogue intelligence operatives and politicians. These laws are meant to govern the conduct of covert activities by intelligence agencies and oversight mechanisms. But weaknesses have been exploited to spy on citizens and for political ends.

South Africa has four official intelligence agencies. They are:

The interception of communications judge grants permission to the above agencies to intercept communications.

Under former president Jacob Zuma (2009-2018), the State Security Agency resorted too quickly to covert operations. It used them in inappropriate situations and interfered with legitimate political activities.

President Cyril Ramaphosa then embarked on a reform process to end the abuses and ensure proper oversight over the intelligence agencies. In 2018 he appointed a high level panel to review the work of the State Security Agency and propose reforms.

The 2023/24 report of parliament’s Joint Standing Committee on Intelligence details how the committee has strengthened oversight following Ramaphosa’s intervention, by requiring that the state intelligence agencies comply with legislative prescripts.

According to the committee’s annual report, the number of applications for permission to intercept communication has gone down in the past year. That’s because the surveillance now has to comply with a strengthened Regulation of Interception of Communication and Provision of Communication Related Information Act (Rica).

The act requires that all cellphone SIM cards in the country be registered. It also makes it illegal to monitor communications (even to eavesdrop on a phone call) without a judge’s permission.

Perhaps the decline in applications to intercept communications is because this covert, intrusive power is now so well regulated relative to other covert powers. The danger is that abuse of those other, less well-regulated powers may continue under the unity government.

I have researched intelligence and surveillance for over a decade. I also served on the 2018 High Level Review Panel on the State Security Agency.

In my view, the intelligence committee’s report reveals important areas of weakness. The new parliamentary intelligence oversight committee needs to address them.

Litany of intelligence abuses

The most serious of these weaknesses is that most covert intrusive powers remain poorly defined; communication surveillance, search of premises and seizure of property are the exceptions. These powers are also poorly regulated and audited in the State Security Agency, Crime Intelligence and Defence Intelligence. Failure to address this problem creates scope for the abuses that occurred under Zuma to recur.

The high level review panel and the State Capture Commission detailed how the State Security Agency’s special operations division ran what appeared to be “special purpose vehicles to siphon funds” from the agency.

Other abuses included:

History of intelligence abuse

As far back as 2008, the Matthews Commission of Inquiry investigated abuses in what was then the domestic branch of intelligence, the National Intelligence Agency.

The commission argued that legislation should state that intrusive methods should be used only when there were reasonable grounds to believe that a serious criminal offence had been, was being or was likely to be committed.

It said such intrusive methods should be used only when the intelligence is necessary and cannot be obtained by other means. Also, intelligence officers seeking to use intrusive powers should seek a warrant to do so.

Covert intelligence operations

Intelligence agencies may legally use intrusive means in secret. These include:

  • deception, to uncover covert criminal and terrorism activities that threaten national security

  • deploying intelligence agents to infiltrate criminal networks using fake identities

  • placing their targets under physical or electronic surveillance

  • engaging in covert action to disrupt their activities.

As the powers used in covert intelligence operations are invasive and threaten privacy, state intelligence agencies should only use them in exceptional circumstances. These could be where actors pose a particularly high risk to national security and cannot be stopped in any other way.

What needs fixing

The new parliamentary intelligence committee must address the inadequate regulation of covert powers. The drafters of the General Intelligence Laws Amendment Bill, 2023, have attempted to address the problem.

They called on the then incoming seventh parliament to set up an evaluation committee, in terms of the Secret Services Act, within a year. The committee is meant to evaluate covert projects funded under the act.

However, this committee will not be a sufficient check on these powers. That’s because it merely needs to be satisfied that the intended projects are in the national interest. That’s a vague term, open to abuse.

Legislation needs to limit the uses of covert powers, just as Rica limits the interception of communications.

Another problem that emerges from the previous intelligence committee’s report is that the auditor-general does not have complete access to information about covert operations. This led to the State Security Agency receiving qualified audits as a matter of course. The agency has argued that providing the information could hamper its work.

This happens even though the staff in the auditor-general’s office responsible for auditing the agency have top secret security clearance. The High Level Review Panel also expressed discomfort with normalising qualified audits.

The auditor-general should be empowered to access the information necessary to perform financial and performance audits. The Inspector-General of Intelligence, whose office monitors and reviews the operations of the intelligence services, could assist by interpreting the non-financial information the auditor-general needs to evaluate performance.

Having to account for spending on covert operations would make it more difficult for the intelligence agencies to abuse their powers. The Conversation

Jane Duncan, Professor of Digital Society, University of Glasgow

This article is republished from The Conversation under a Creative Commons license.

Saturday, April 20, 2024

Understanding AI outputs: study shows pro-western cultural bias in the way AI decisions are explained

 

AI models’ outputs need to be properly explained to the people affected. DrAfter123/Getty Images

Humans are increasingly using artificial intelligence (AI) to inform decisions about our lives. AI is, for instance, helping to make hiring choices and offer medical diagnoses.

If you were affected, you might want an explanation of why an AI system produced the decision it did. Yet AI systems are often so computationally complex that not even their designers fully know how the decisions were produced. That’s why the development of “explainable AI” (or XAI) is booming. Explainable AI includes systems that are either themselves simple enough to be fully understood by people, or that produce easily understandable explanations of other, more complex AI models’ outputs.

Explainable AI systems help AI engineers to monitor and correct their models’ processing. They also help users to make informed decisions about whether to trust or how best to use AI outputs.

Not all AI systems need to be explainable. But in high-stakes domains, we can expect XAI to become widespread. For instance, the recently adopted European AI Act, a forerunner for similar laws worldwide, protects a “right to explanation”. Citizens have a right to receive an explanation about an AI decision that affects their other rights.

But what if something like your cultural background affects what explanations you expect from an AI?

In a recent systematic review we analysed over 200 studies from the last ten years (2012–2022) in which the explanations given by XAI systems were tested on people. We wanted to see to what extent researchers indicated awareness of cultural variations that were potentially relevant for designing satisfactory explainable AI.

Our findings suggest that many existing systems may produce explanations that are primarily tailored to individualist, typically western, populations (for instance, people in the US or UK). Also, most XAI user studies only sampled western populations, but unwarranted generalisations of results to non-western populations were pervasive.

Cultural differences in explanations

There are two common ways to explain someone’s actions. One involves invoking the person’s beliefs and desires. This explanation is internalist, focused on what’s going on inside someone’s head. The other is externalist, citing factors outside the person, such as social norms or rules.

To see the difference, think about how we might explain a driver’s stopping at a red traffic light. We could say, “They believe that the light is red and don’t want to violate any traffic rules, so they decided to stop.” This is an internalist explanation. But we could also say, “The lights are red and the traffic rules require that drivers stop at red lights, so the driver stopped.” This is an externalist explanation.

Many psychological studies suggest internalist explanations are preferred in “individualistic” countries where people often view themselves as more independent from others. These countries tend to be in the west, educated, industrialised, rich, and democratic.

However, such explanations are not obviously preferred over externalist explanations in “collectivist” societies, such as those commonly found across Africa or south Asia, where people often view themselves as interdependent.

Preferences in explaining behaviour are relevant for what a successful XAI output could be. An AI that offers a medical diagnosis might be accompanied by an explanation such as: “Since your symptoms are fever, sore throat and headache, the classifier thinks you have flu.” This is internalist because the explanation invokes an “internal” state of the AI – what it “thinks” – albeit metaphorically. Alternatively, the diagnosis could be accompanied by an explanation that does not mention an internal state, such as: “Since your symptoms are fever, sore throat and headache, based on its training on diagnostic inclusion criteria, the classifier produces the output that you have flu.” This is externalist. The explanation draws on “external” factors like inclusion criteria, similar to how we might explain stopping at a traffic light by appealing to the rules of the road.

If people from different cultures prefer different kinds of explanations, this matters for designing inclusive systems of explainable AI.

Our research, however, suggests that XAI developers are not sensitive to potential cultural differences in explanation preferences.

Overlooking cultural differences

A striking 93.7% of the studies we reviewed did not indicate awareness of cultural variations potentially relevant to designing explainable AI. Moreover, when we checked the cultural background of the people tested in the studies, we found 48.1% of the studies did not report on cultural background at all. This suggests that researchers did not consider cultural background to be a factor that could influence the generalisability of results.

Of those that did report on cultural background, 81.3% only sampled western, industrialised, educated, rich and democratic populations. A mere 8.4% sampled non-western populations and 10.3% sampled mixed populations.

Sampling only one kind of population need not be a problem if conclusions are limited to that population, or researchers give reasons to think other populations are similar. Yet, out of the studies that reported on cultural background, 70.1% extended their conclusions beyond the study population – to users, people, humans in general – and most studies did not contain evidence of reflection on cultural similarity.

To see how deep the oversight of culture runs in explainable AI research, we added a systematic “meta” review of 34 existing literature reviews of the field. Surprisingly, only two reviews commented on western-skewed sampling in user research, and only one review mentioned overgeneralisations of XAI study findings.

This is problematic.

Why the results matter

If findings about explainable AI systems only hold for one kind of population, these systems may not meet the explanatory requirements of other people affected by or using them. This can diminish trust in AI. When AI systems make high-stakes decisions but don’t give you a satisfactory explanation, you’ll likely distrust them even if their decisions (such as medical diagnoses) are accurate and important for you.

To address this cultural bias in XAI, developers and psychologists should collaborate to test for relevant cultural differences. We also recommend that cultural backgrounds of samples be reported with XAI user study findings.

Researchers should state whether their study sample represents a wider population. They may also use qualifiers like “US users” or “western participants” in reporting their findings.

As AI is being used worldwide to make important decisions, systems must provide explanations that people from different cultures find acceptable. As it stands, large populations who could benefit from the potential of explainable AI risk being overlooked in XAI research. The Conversation

Mary Carman, Senior Lecturer in Philosophy, University of the Witwatersrand and Uwe Peters, Assistant Professor of Philosophy, Utrecht University

This article is republished from The Conversation under a Creative Commons license.

Friday, December 1, 2023

Merriam-Webster’s word of the year – authentic – reflects growing concerns over AI’s ability to deceive and dehumanize

 

According to the publisher’s editor-at-large, 2023 represented ‘a kind of crisis of authenticity.’ lambada/E+ via Getty Images

When Merriam-Webster announced that its word of the year for 2023 was “authentic,” it did so with over a month to go in the calendar year.

Even then, the dictionary publisher was late to the game.

In a lexicographic form of Christmas creep, Collins English Dictionary announced its 2023 word of the year, “AI,” on Oct. 31. Cambridge University Press followed suit on Nov. 15 with “hallucinate,” a word used to refer to incorrect or misleading information provided by generative AI programs.

At any rate, terms related to artificial intelligence appear to rule the roost, with “authentic” also falling under that umbrella.

AI and the authenticity crisis

For the past 20 years, Merriam-Webster, the oldest dictionary publisher in the U.S., has chosen a word of the year – a term that encapsulates, in one form or another, the zeitgeist of that past year. In 2020, the word was “pandemic.” The next year’s winner? “Vaccine.”

“Authentic” is, at first glance, a little less obvious.

According to the publisher’s editor-at-large, Peter Sokolowski, 2023 represented “a kind of crisis of authenticity.” He added that the choice was also informed by the number of online users who looked up the word’s meaning throughout the year.

Print ad with a drawing of a thick book accompanied by the text, 'The One Great Standard Authority.'
A 1906 print ad presented Webster’s International Dictionary as an authoritative clearinghouse for all things English – an authentic, reliable source. Jay Paull/Getty Images

The word “authentic,” in the sense of something that is accurate or authoritative, has its roots in French and Latin. The Oxford English Dictionary has identified its usage in English as early as the late 14th century.

And yet the concept – particularly as it applies to human creations and human behavior – is slippery.

Is a photograph made from film more authentic than one made from a digital camera? Does an authentic scotch have to be made at a small-batch distillery in Scotland? When socializing, are you being authentic – or just plain rude – when you skirt niceties and small talk? Does being your authentic self mean pursuing something that feels natural, even at the expense of cultural or legal constraints?

The more you think about it, the more it seems like an ever-elusive ideal – one further complicated by advances in artificial intelligence.

How much human touch?

Intelligence of the artificial variety – as in nonhuman, inauthentic, computer-generated intelligence – was the technology story of the past year.

At the end of 2022, OpenAI publicly released ChatGPT, a chatbot built on GPT-3.5, one of its so-called large language models. It was widely seen as a breakthrough in artificial intelligence, but its rapid adoption led to questions about the accuracy of its answers.

The chatbot also became popular among students, which compelled teachers to grapple with how to ensure their assignments weren’t being completed by ChatGPT.

Issues of authenticity have arisen in other areas as well. In November 2023, a track described as the “last Beatles song” was released. “Now and Then” is a compilation of music originally written and performed by John Lennon in the 1970s, with additional music recorded by the other band members in the 1990s. A machine learning algorithm was recently employed to separate Lennon’s vocals from his piano accompaniment, and this allowed a final version to be released.

But is it an authentic “Beatles” song? Not everyone is convinced.

Advances in technology have also allowed the manipulation of audio and video recordings. Referred to as “deepfakes,” such transformations can make it appear that a celebrity or a politician said something that they did not – a troubling prospect as the U.S. heads into what is sure to be a contentious 2024 election season.

Writing for The Conversation in May 2023, education scholar Victor R. Lee explored the AI-fueled authenticity crisis.

Our judgments of authenticity are knee-jerk, he explained, honed over years of experience. Sure, occasionally we’re fooled, but our antennae are generally reliable. Generative AI short-circuits this cognitive framework.

“That’s because back when it took a lot of time to produce original new content, there was a general assumption … that it only could have been made by skilled individuals putting in a lot of effort and acting with the best of intentions,” he wrote.

“These are not safe assumptions anymore,” he added. “If it looks like a duck, walks like a duck and quacks like a duck, everyone will need to consider that it may not have actually hatched from an egg.”

Though there seems to be a general understanding that human minds and human hands must play some role in creating something authentic or being authentic, authenticity has always been a difficult concept to define.

So it’s somewhat fitting that as our collective handle on reality has become ever more tenuous, an elusive word for an abstract ideal is Merriam-Webster’s word of the year. The Conversation

Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis

This article is republished from The Conversation under a Creative Commons license.