By Kemiso Wessie

Individuals and news organisations should experiment with AI, but in a carefully supervised way, Charlie Beckett, director of POLIS at the London School of Economics, told a round table of the African Journalism Educators’ Network in Kigali.

Beckett said reputable organisations were adopting a cautious approach, and experts or teams should be appointed to understand AI’s impact on journalism. He stressed the importance of establishing guidelines to provide advice on both the positive and negative aspects of AI. He added that these guidelines should be adaptable, given the rapidly evolving nature of AI. 

Beckett pointed out that AI’s reliance on training data led to biases in its outputs.

AI tools were assistants to journalists rather than replacements, he said. He outlined various ways that AI can enhance journalism, such as summarising information and reformatting content for different platforms. He also mentioned the potential for AI to improve content search and selection, as demonstrated by a recent AI-powered search tool for Associated Press TV. 

Beckett urged journalism professionals to collaborate, share experiences, and optimise the use of AI tools. He emphasised that failing to prepare future journalists with AI skills could allow nefarious actors to misuse these technologies. 

Pheladi Sethusa of the Wits Centre for Journalism shared an example of introducing AI to journalism students. One significant experiment involved students asking ChatGPT to write news articles about themselves based on minimal information. The students noted that ChatGPT could generate false information about them, which posed both factual and ethical concerns.

The students also found that those who added more context to their prompts received more “accurate” AI-generated responses, and that regenerating responses could produce new false narratives. Echoing Beckett’s earlier point on training biases, they noticed that the generated articles were more likely to describe a student as an entrepreneur or someone with tech-related achievements, showcasing a potential problem with how AI interprets success. Some students went further, researching exactly how AI language models work to better understand their capabilities and limitations.

During the Q&A portion of the workshop, the discussion touched on the ownership of AI tools and the challenge of detecting AI-generated content in student work. Beckett acknowledged the complexities of AI ownership and said that open-source tools might not always be the best solution, given how rapidly the technology is evolving. He also highlighted the importance of news organisations building their own AI databases.

Sethusa discussed the difficulty in detecting AI-generated content, given the natural and coherent language AI produces. Sethusa also expressed cautious optimism about AI’s role in journalism education. She believed that fostering ethics and integrity could deter students from over-relying on AI, as they recognised the importance of creating authentic, human-authored content. 

While the use of AI technology is growing, ethical concerns arise around content originality and the integrity of journalism. Integrating AI into journalism education raises further complexities.

The discussion highlighted that, as AI technologies continue to evolve, preparing students to navigate the resulting ethical and practical challenges has become increasingly vital.