Monday, July 29, 2024

South Africa is trying to put a stop to the abuse of its intelligence agencies - what still needs fixing

 A man points at a CCTV screen while talking on a cellphone.


South Africa’s security laws are open to abuse by rogue intelligence operatives and politicians. These laws are meant to govern covert activities by the intelligence agencies and the oversight mechanisms that check them. But weaknesses in the laws have been exploited to spy on citizens and for political ends.

South Africa has four official intelligence agencies, among them the State Security Agency, the police’s Crime Intelligence division and Defence Intelligence.

The interception of communications judge grants permission to the above agencies to intercept communications.

Under former president Jacob Zuma (2009-2018), the State Security Agency resorted too quickly to covert operations. It used them in inappropriate situations and interfered with legitimate political activities.

President Cyril Ramaphosa then embarked on a reform process to end the abuses and ensure proper oversight over the intelligence agencies. In 2018 he appointed a high level panel to review the work of the State Security Agency and propose reforms.

The 2023/4 report of parliament’s Joint Standing Committee on Intelligence details how the committee has strengthened oversight following Ramaphosa’s intervention, chiefly by requiring that the state intelligence agencies comply with legislative prescripts.

According to the committee’s annual report, the number of applications for permission to intercept communication has gone down in the past year. That’s because the surveillance now has to comply with a strengthened Regulation of Interception of Communications and Provision of Communication-Related Information Act (Rica).

The act requires that all cellphone SIM cards in the country be registered. It also makes it illegal to monitor communications (even to eavesdrop on a phone call) without a judge’s permission.

Perhaps the decline in applications to intercept communications is because this covert, intrusive power is now so well regulated relative to other covert powers. The danger is that abuse of other powers that are not as well regulated may continue under the unity government.

I have researched intelligence and surveillance for over a decade. I also served on the 2018 High Level Review Panel on the State Security Agency.

In my view, the intelligence committee’s report reveals important areas of weakness. The new parliamentary intelligence oversight committee needs to address them.

Litany of intelligence abuses

The most serious of these weaknesses is that most covert intrusive powers remain poorly defined. Communication surveillance, search of premises and seizure of property are exceptions. These powers are also poorly regulated and audited in the State Security Agency, Crime Intelligence and Defence Intelligence. Failure to address this problem creates scope for the abuses that occurred under Zuma to recur.

The high level review panel and the State Capture Commission detailed how the State Security Agency’s special operations division ran what appeared to be “special purpose vehicles to siphon funds” from the agency.

The panel and the commission documented a range of other abuses as well.

History of intelligence abuse

As far back as 2008, the Matthews Commission of Inquiry investigated abuses in what was then the domestic branch of intelligence, the National Intelligence Agency.

The commission argued that legislation should state that intrusive methods should be used only when there were reasonable grounds to believe that a serious criminal offence had been, was being or was likely to be committed.

It said such intrusive methods should be used only when the intelligence is necessary and cannot be obtained by other means. Also, intelligence officers wanting to use intrusive powers should have to obtain a warrant to do so.

Covert intelligence operations

Intelligence agencies may legally use intrusive means in secret. These include:

  • deception, to uncover covert criminal and terrorism activities that threaten national security

  • deploying intelligence agents to infiltrate criminal networks using fake identities

  • placing their targets under physical or electronic surveillance

  • engaging in covert action to disrupt their activities.

As the powers used in covert intelligence operations are invasive and threaten privacy, state intelligence agencies should only use them in exceptional circumstances. These could be where actors pose a particularly high risk to national security and cannot be stopped in any other way.

What needs fixing

The new parliamentary intelligence committee must address the inadequate regulation of covert powers. The drafters of the General Intelligence Laws Amendment Bill, 2023 have attempted to address the problem.

They called on the then incoming seventh parliament to set up an evaluation committee in terms of the Secret Services Act within a year. It is to evaluate covert projects funded in terms of the act.

However, this committee will not be a sufficient check on these powers. That’s because it merely needs to be satisfied that the intended projects are in the national interest. That’s a vague term, open to abuse.

Legislation needs to limit the uses of covert powers, like Rica limits the interception of communications.

Another problem that emerges from the previous intelligence committee’s report is that the auditor-general does not have complete access to information about covert operations. This led to the State Security Agency receiving qualified audits as a matter of course. The agency has argued that providing the information could hamper its work.

This happens even though the staff in the auditor-general’s office responsible for auditing the agency have top secret security clearance. The High Level Review Panel also expressed discomfort with normalising qualified audits.

The auditor-general should be empowered to access the information necessary to perform financial and performance audits. The inspector-general of intelligence, who monitors and reviews the operations of the intelligence services, could assist by interpreting the non-financial information the auditor-general needs to evaluate performance.

Having to account for spending on covert operations would make it more difficult for the intelligence agencies to abuse their powers.

Jane Duncan, Professor of Digital Society, University of Glasgow

This article is republished from The Conversation under a Creative Commons license.

Saturday, April 20, 2024

Understanding AI outputs: study shows pro-western cultural bias in the way AI decisions are explained

 

AI models’ outputs need to be properly explained to the people affected. DrAfter123/Getty Images

Humans are increasingly using artificial intelligence (AI) to inform decisions about our lives. AI is, for instance, helping to make hiring choices and offer medical diagnoses.

If you were affected, you might want an explanation of why an AI system produced the decision it did. Yet AI systems are often so computationally complex that not even their designers fully know how the decisions were produced. That’s why the development of “explainable AI” (or XAI) is booming. Explainable AI includes systems that are either themselves simple enough to be fully understood by people, or that produce easily understandable explanations of other, more complex AI models’ outputs.

Explainable AI systems help AI engineers to monitor and correct their models’ processing. They also help users to make informed decisions about whether to trust or how best to use AI outputs.

Not all AI systems need to be explainable. But in high-stakes domains, we can expect XAI to become widespread. For instance, the recently adopted European AI Act, a forerunner for similar laws worldwide, protects a “right to explanation”. Citizens have a right to receive an explanation about an AI decision that affects their other rights.

But what if something like your cultural background affects what explanations you expect from an AI?

In a recent systematic review we analysed over 200 studies from the last ten years (2012–2022) in which the explanations given by XAI systems were tested on people. We wanted to see to what extent researchers indicated awareness of cultural variations that were potentially relevant for designing satisfactory explainable AI.

Our findings suggest that many existing systems may produce explanations that are primarily tailored to individualist, typically western, populations (for instance, people in the US or UK). Also, most XAI user studies only sampled western populations, but unwarranted generalisations of results to non-western populations were pervasive.

Cultural differences in explanations

There are two common ways to explain someone’s actions. One involves invoking the person’s beliefs and desires. This explanation is internalist, focused on what’s going on inside someone’s head. The other is externalist, citing factors like social norms, rules, or other factors that are outside the person.

To see the difference, think about how we might explain a driver’s stopping at a red traffic light. We could say, “They believe that the light is red and don’t want to violate any traffic rules, so they decided to stop.” This is an internalist explanation. But we could also say, “The lights are red and the traffic rules require that drivers stop at red lights, so the driver stopped.” This is an externalist explanation.

Many psychological studies suggest internalist explanations are preferred in “individualistic” countries where people often view themselves as more independent from others. These countries tend to be western, educated, industrialised, rich and democratic.

However, such explanations are not obviously preferred over externalist explanations in “collectivist” societies, such as those commonly found across Africa or south Asia, where people often view themselves as interdependent.

Preferences in explaining behaviour are relevant for what a successful XAI output could be. An AI that offers a medical diagnosis might be accompanied by an explanation such as: “Since your symptoms are fever, sore throat and headache, the classifier thinks you have flu.” This is internalist because the explanation invokes an “internal” state of the AI – what it “thinks” – albeit metaphorically. Alternatively, the diagnosis could be accompanied by an explanation that does not mention an internal state, such as: “Since your symptoms are fever, sore throat and headache, based on its training on diagnostic inclusion criteria, the classifier produces the output that you have flu.” This is externalist. The explanation draws on “external” factors like inclusion criteria, similar to how we might explain stopping at a traffic light by appealing to the rules of the road.
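
To make the contrast concrete, here is a minimal illustrative sketch, in Python, of how the same classifier output could be wrapped in either style of explanation. The symptom list, label and template wording are assumptions for illustration only and are not taken from any system covered in the review.

  # Hypothetical sketch: one classifier output rendered in two explanation styles.
  # The symptoms, label and wording below are illustrative assumptions only.

  def internalist_explanation(symptoms, label):
      # Frames the explanation around the model's "internal" state ("thinks").
      return (f"Since your symptoms are {', '.join(symptoms)}, "
              f"the classifier thinks you have {label}.")

  def externalist_explanation(symptoms, label):
      # Frames the explanation around external factors (training criteria).
      return (f"Since your symptoms are {', '.join(symptoms)}, based on its "
              f"training on diagnostic inclusion criteria, the classifier "
              f"produces the output that you have {label}.")

  symptoms = ["fever", "sore throat", "headache"]
  print(internalist_explanation(symptoms, "flu"))
  print(externalist_explanation(symptoms, "flu"))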

If people from different cultures prefer different kinds of explanations, this matters for designing inclusive systems of explainable AI.

Our research, however, suggests that XAI developers are not sensitive to potential cultural differences in explanation preferences.

Overlooking cultural differences

A striking 93.7% of the studies we reviewed did not indicate awareness of cultural variations potentially relevant to designing explainable AI. Moreover, when we checked the cultural background of the people tested in the studies, we found 48.1% of the studies did not report on cultural background at all. This suggests that researchers did not consider cultural background to be a factor that could influence the generalisability of results.

Of those that did report on cultural background, 81.3% only sampled western, industrialised, educated, rich and democratic populations. A mere 8.4% sampled non-western populations and 10.3% sampled mixed populations.

Sampling only one kind of population need not be a problem if conclusions are limited to that population, or researchers give reasons to think other populations are similar. Yet, out of the studies that reported on cultural background, 70.1% extended their conclusions beyond the study population – to users, people, humans in general – and most studies did not contain evidence of reflection on cultural similarity.

To see how deep the oversight of culture runs in explainable AI research, we added a systematic “meta” review of 34 existing literature reviews of the field. Surprisingly, only two reviews commented on western-skewed sampling in user research, and only one review mentioned overgeneralisations of XAI study findings.

This is problematic.

Why the results matter

If findings about explainable AI systems only hold for one kind of population, these systems may not meet the explanatory requirements of other people affected by or using them. This can diminish trust in AI. When AI systems make high-stakes decisions but don’t give you a satisfactory explanation, you’ll likely distrust them even if their decisions (such as medical diagnoses) are accurate and important for you.

To address this cultural bias in XAI, developers and psychologists should collaborate to test for relevant cultural differences. We also recommend that cultural backgrounds of samples be reported with XAI user study findings.

Researchers should state whether their study sample represents a wider population. They may also use qualifiers like “US users” or “western participants” in reporting their findings.

As AI is being used worldwide to make important decisions, systems must provide explanations that people from different cultures find acceptable. As it stands, large populations who could benefit from the potential of explainable AI risk being overlooked in XAI research.

Mary Carman, Senior Lecturer in Philosophy, University of the Witwatersrand and Uwe Peters, Assistant Professor of Philosophy, Utrecht University

This article is republished from The Conversation under a Creative Commons license.

Friday, December 1, 2023

Merriam-Webster’s word of the year – authentic – reflects growing concerns over AI’s ability to deceive and dehumanize

 

According to the publisher’s editor-at-large, 2023 represented ‘a kind of crisis of authenticity.’ lambada/E+ via Getty Images

When Merriam-Webster announced that its word of the year for 2023 was “authentic,” it did so with over a month to go in the calendar year.

Even then, the dictionary publisher was late to the game.

In a lexicographic form of Christmas creep, Collins English Dictionary announced its 2023 word of the year, “AI,” on Oct. 31. Cambridge University Press followed suit on Nov. 15 with “hallucinate,” a word used to refer to incorrect or misleading information provided by generative AI programs.

At any rate, terms related to artificial intelligence appear to rule the roost, with “authentic” also falling under that umbrella.

AI and the authenticity crisis

For the past 20 years, Merriam-Webster, the oldest dictionary publisher in the U.S., has chosen a word of the year – a term that encapsulates, in one form or another, the zeitgeist of that past year. In 2020, the word was “pandemic.” The next year’s winner? “Vaccine.”

“Authentic” is, at first glance, a little less obvious.

According to the publisher’s editor-at-large, Peter Sokolowski, 2023 represented “a kind of crisis of authenticity.” He added that the choice was also informed by the number of online users who looked up the word’s meaning throughout the year.

Print ad with a drawing of a thick book accompanied by the text, 'The One Great Standard Authority.'
A 1906 print ad presented Webster’s International Dictionary as an authoritative clearinghouse for all things English – an authentic, reliable source. Jay Paull/Getty Images

The word “authentic,” in the sense of something that is accurate or authoritative, has its roots in French and Latin. The Oxford English Dictionary has identified its usage in English as early as the late 14th century.

And yet the concept – particularly as it applies to human creations and human behavior – is slippery.

Is a photograph made from film more authentic than one made from a digital camera? Does an authentic scotch have to be made at a small-batch distillery in Scotland? When socializing, are you being authentic – or just plain rude – when you skirt niceties and small talk? Does being your authentic self mean pursuing something that feels natural, even at the expense of cultural or legal constraints?

The more you think about it, the more it seems like an ever-elusive ideal – one further complicated by advances in artificial intelligence.

How much human touch?

Intelligence of the artificial variety – as in nonhuman, inauthentic, computer-generated intelligence – was the technology story of the past year.

At the end of 2022, OpenAI publicly released ChatGPT, a chatbot built on its GPT-3.5 large language model. It was widely seen as a breakthrough in artificial intelligence, but its rapid adoption led to questions about the accuracy of its answers.

The chatbot also became popular among students, which compelled teachers to grapple with how to ensure their assignments weren’t being completed by ChatGPT.

Issues of authenticity have arisen in other areas as well. In November 2023, a track described as the “last Beatles song” was released. “Now and Then” is a compilation of music originally written and performed by John Lennon in the 1970s, with additional music recorded by the other band members in the 1990s. A machine learning algorithm was recently employed to separate Lennon’s vocals from his piano accompaniment, and this allowed a final version to be released.

But is it an authentic “Beatles” song? Not everyone is convinced.

Advances in technology have also allowed the manipulation of audio and video recordings. Referred to as “deepfakes,” such transformations can make it appear that a celebrity or a politician said something that they did not – a troubling prospect as the U.S. heads into what is sure to be a contentious 2024 election season.

Writing for The Conversation in May 2023, education scholar Victor R. Lee explored the AI-fueled authenticity crisis.

Our judgments of authenticity are knee-jerk, he explained, honed over years of experience. Sure, occasionally we’re fooled, but our antennae are generally reliable. Generative AI short-circuits this cognitive framework.

“That’s because back when it took a lot of time to produce original new content, there was a general assumption … that it only could have been made by skilled individuals putting in a lot of effort and acting with the best of intentions,” he wrote.

“These are not safe assumptions anymore,” he added. “If it looks like a duck, walks like a duck and quacks like a duck, everyone will need to consider that it may not have actually hatched from an egg.”

Though there seems to be a general understanding that human minds and human hands must play some role in creating something authentic or being authentic, authenticity has always been a difficult concept to define.

So it’s somewhat fitting that as our collective handle on reality has become ever more tenuous, an elusive word for an abstract ideal is Merriam-Webster’s word of the year.

Roger J. Kreuz, Associate Dean and Professor of Psychology, University of Memphis

This article is republished from The Conversation under a Creative Commons license.

Sunday, November 5, 2023

AI won’t be replacing your priest, minister, rabbi or imam any time soon

 

An android called ‘Kannon Mindar,’ which preaches Buddhist sermons. Richard Atrero de Guzman/NurPhoto via Getty Images

Early in the summer of 2023, robots projected on a screen delivered sermons to about 300 congregants at St. Paul’s Church in Bavaria, Germany. Created by ChatGPT and Jonas Simmerlein, a theologian and philosopher from the University of Vienna, the experimental church service drew immense interest.

The deadpan sermon delivery prompted many to doubt whether AI can really displace priests and pastoral instruction. At the end of the service, an attendee remarked, “There was no heart and no soul.”

But the growing use of AI may prompt more churches to debut AI-generated worship services. A church in Austin, Texas, for example, has put out a banner advertising a service with an AI-generated sermon. The service will also include an AI-generated call to worship and pastoral prayer. Yet this use of AI has prompted concerns, as these technologies are seen as disrupting authentic human presence and leadership in religious life.

My research, alongside others in the interdisciplinary fields of digital religion and human-machine communication, illuminates what is missing in discussions of AI, which tend to be machine-centric and focused on extreme bright or dark outcomes.

It points to how religious leaders are still the ones influencing the latest technologies within their organizations. AI cannot simply displace humans, since storytelling and programming continue to be critical for its development and deployment.

Here are three ways in which machines will need a priest.

1. Clergy approve and affirm AI use

Given rapid changes in emerging technologies, priests have historically served as gatekeepers who endorse and invest in new digital applications. In 2015, the master priest of the Buddhist Longquan Temple in Beijing promoted the adoption of Xian'er, the robot monk, as a pathway to spiritual engagement in China.

The priest rejected claims that religious AI was sacrilegious and described innovation in AI as spiritually compatible with religious values. He encouraged the incorporation of AI into religious practices to help believers gain spiritual insight and to elevate the temple’s outreach efforts in spreading Buddhist teachings.

Similarly, in 2019, the head priest of the Kodai-ji Buddhist temple in Kyoto, Japan, named an adult-size android “Kannon Mindar,” after the revered Goddess of Mercy.

This robotic deity, who can preach the Heart Sutra, a classic and popular Buddhist scripture, was intentionally built in partnership with Osaka University at a cost of about US$1 million. The idea behind it was to stimulate public interest and connect religious seekers and practitioners with Buddhist teachings.

By naming and affirming AI use in religious life, religious leaders are acting as key influencers in the development and application of robots in spiritual practice.

2. Priests direct human-machine communication

Today, much of AI data operations remain invisible or opaque. Many adults do not recognize how much AI is already a part of our daily lives, for example in customer service chatbots and custom product recommendations.

But human decision making and judgment about technical processes, including providing feedback for reinforcement learning and interface design, is vital for the day-to-day operations of AI.

Consider the recent robotic initiatives at the Grand Mosque in Saudi Arabia. At this mosque, multilingual robots are being deployed for multiple purposes, including providing answers to questions related to ritual performances in 11 languages.

A man in red checked head scarf and flowing white shirt with a robot.
A robot at the Grand Mosque in Saudi Arabia’s holy city of Mecca. Fayez Nureldine / AFP via Getty images

Notably, while these robots stationed at the Grand Mosque can recite the Holy Quran, they also provide visitors with connections to local imams. Their touch-screen interfaces are equipped with bar codes, allowing users to learn more about the weekly schedules of mosque staff, including clerics who lead Friday sermons. In addition, these robots can connect visitors with Islamic scholars via video interactions to answer their queries around the clock.

What this shows is that while robots can serve as valuable sources of religious knowledge, the strategic channeling of inquiries back to established religious leaders is reinforcing the credibility of priestly authority.

3. Religious leaders can create and share ethical guidelines

Clergy are trying to raise awareness of AI’s potential for human flourishing and well-being. For example, in recent years, Pope Francis has been vocal in addressing the potential benefits and disruptive dangers of the new AI technologies.

The Vatican has hosted technology industry leaders and called for ethical guidelines to “safeguard the good of the human family” and maintain “vigilance against technology misuse.” The ethical use of AI for religion includes a concern for human bias in programming, which can result in inaccuracies and unsafe outcomes.

In June 2023, the Vatican’s culture and education body, in partnership with Santa Clara University, released a 140-page AI ethics handbook for technology organizations. The handbook stressed the importance of embedding moral ideals in the development of AI, including respect for human dignity and rights in data privacy, machine learning and facial recognition technologies.

By creating and sharing ethical guidelines on AI, religious leaders can speak to future AI development from its inception, to guide design and consumer implementation toward cherished values.

In sum, while religious leaders appear to be undervalued in AI development and discourse, I argue that it is important to recognize the ways in which clergy are contributing to skillful communication involving AI technologies. In the process, they are co-constructing the conversations that chatbots such as the one at the church in Bavaria are having with congregants.

Pauline Hope Cheong, Professor of Human Communication and Communication Technologies, Arizona State University

This article is republished from The Conversation under a Creative Commons license.

Monday, October 23, 2023

A Glimpse into Township Life: Stories and Experiences of the Local Residents



 

 

As I walked through the streets of a South African township, I was overwhelmed by the sights, sounds, and smells that surrounded me. The bustling streets were filled with people of all ages, from young children playing games to elderly men and women chatting on street corners.

The brightly colored houses stood side by side, with corrugated metal roofs glinting in the hot sun. Laundry hung from lines strung across the streets, adding splashes of vibrant color to the already bright scene.

As I walked, I heard music coming from several different sources. The rhythmic beat of African drums echoed through the air, while gospel music floated out of several churches. Children were singing and dancing, and people laughed and chatted in a range of languages.

But as much as I was enjoying the vibrant energy of the township, I couldn't help but feel a sense of sadness as well. The poverty that plagued the area was obvious, with many of the houses appearing run down and in need of repair. The streets were littered with trash, and the smell of sewage was strong in some areas.

But despite these hardships, the people of the township were welcoming and friendly. I was invited into several homes and offered food and drink by those who had very little to give.

As I left the township, I felt both humbled and inspired. The resilience and strength of the people who lived there was truly remarkable, and I felt grateful to have had the opportunity to experience their community.