Constructive News Algorithm

The Constructive News Algorithm is an innovative machine learning tool that uses language analysis to identify constructive elements and biases in news coverage. Designed to complement the editorial process, the Constructive News Algorithm helps newsrooms evaluate the “constructiveness” of their coverage.

The Constructive News Mirror

Get weekly editorial reports that show how constructive your reporting has been. We offer a new platform to measure journalistic quality and highlight the content that matters to your audience.


About the project

A growing number of newsrooms around the world are adopting a constructive approach to journalism. To facilitate this shift, we’ve created an innovative tool to make it easier to identify constructive stories. The Constructive News Algorithm, developed with support from the European Broadcasting Union (EBU) and the Google News Initiative, classifies whether a news article has a constructive narrative or not based on whether its elements align with our standards for constructive journalism: facilitating solutions, nuance and democratic conversation.

The algorithm is fast and consistent, with an 88% accuracy rate compared with our team’s specially trained news raters. It filters articles based on their content — not on clicks, likes or other viral metrics — ensuring it remains a measurable, objective marker for news quality. Our goal is to promote good journalism that helps audiences stay engaged with their communities and the world around them, rather than push them away from the news — and each other.

The Constructive News Algorithm is already being used by news organizations across Europe. And it’s adaptable: News editors can use the tool to streamline their story selection and audit their coverage, giving them a jumping-off point to invest more in constructive angles. Search engines can use it to provide better results, while social media companies can curate the news on their platforms and news consumers can filter for higher quality information using in-browser applications.


How the algorithm works

Creating the Dataset

We use a method called supervised machine learning, where the algorithm learns to copy human decisions, to identify constructive news — meaning real people’s perspectives are at the center of the algorithm’s sorting process. We first compiled a set of more than 3,000 news stories from 28 large media organizations across Europe, including both web and print articles.

Constructive Institute Fellows then read the stories and gave them a 1-5 “constructiveness” score, relying on their intuition as journalists trained in constructive journalism rather than a set of objective criteria. We considered stories that scored a 4 or 5 to be constructive, while stories that scored a 1, 2 or 3 were considered not constructive.
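As a rough illustration, converting those 1-5 ratings into the binary labels used for training might look like the Python sketch below. The column names and example rows are hypothetical, not the Institute’s actual dataset schema.

```python
import pandas as pd

# Hypothetical ratings table; the real dataset schema is not public.
ratings = pd.DataFrame({
    "article_id": [101, 102, 103],
    "text": ["...", "...", "..."],
    "constructiveness_score": [5, 2, 4],  # 1-5 score assigned by a Fellow
})

# Scores of 4 or 5 count as constructive (label 1); scores of 1-3 do not (label 0).
ratings["label"] = (ratings["constructiveness_score"] >= 4).astype(int)
```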

Training the Algorithm

The Constructive News Algorithm is based on a BERT model, a state-of-the-art language model developed by Google that is trained using text from books, articles and websites. That means we’re using a language model that already understands human language, which we can then fine-tune for our goal: identifying constructive news.

We split our dataset into a training set (80% of articles) and a testing set (20% of articles). We fed all of the training set articles, along with their “constructive” or “not constructive” labels, to the BERT model, which identified patterns, such as specific words and sentences, that each type of article has in common. The model was then updated to adjust the weight it gives these elements, allowing it to draw the sharpest possible distinction between constructive and non-constructive articles.
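A minimal sketch of this fine-tuning step, using the Hugging Face transformers library, could look like the code below. The checkpoint name, hyperparameters and the load_labeled_articles() helper are assumptions for illustration; the Institute’s actual training setup is not published.

```python
from datasets import Dataset
from sklearn.model_selection import train_test_split
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Assumed helper returning article texts and their 0/1 constructiveness labels.
texts, labels = load_labeled_articles()

# 80% of articles for training, 20% held out for testing.
train_texts, test_texts, train_labels, test_labels = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels)

# Start from a pretrained BERT checkpoint (the exact checkpoint is an assumption).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=512)

train_ds = Dataset.from_dict({"text": train_texts, "label": train_labels}).map(tokenize, batched=True)
test_ds = Dataset.from_dict({"text": test_texts, "label": test_labels}).map(tokenize, batched=True)

# Fine-tune the BERT weights and the new classification head on the labelled articles.
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="constructive-bert",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train_ds,
    eval_dataset=test_ds,
)
trainer.train()
```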

To ensure the algorithm performs accurately on data it hasn’t seen, we then used the 20% of articles saved for the testing set to measure the overlap between the model’s classifications and our experts’ evaluations of constructive news. Here the model achieves 88% accuracy, which is as good as the agreement between two of our Fellows reading the same article.
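Continuing the sketch above, the held-out 20% can be used to measure how often the model agrees with the human raters; the 88% figure reported here corresponds to this kind of simple agreement rate.

```python
import numpy as np

# Predict on the held-out test set and compare with the Fellows' labels.
predictions = trainer.predict(test_ds)
predicted_labels = np.argmax(predictions.predictions, axis=-1)

accuracy = (predicted_labels == np.array(test_labels)).mean()
print(f"Agreement with human raters on unseen articles: {accuracy:.0%}")
```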

Avoiding Bias: Filtering Out Names

Potential biases can arise when using machine learning to train algorithms, and we took care to address them. Given that the narrative surrounding political figures, companies or celebrities can change rapidly, this is especially important when working with news stories.

For example, if our data collection period overlapped with a major political scandal, the values attributed to the disgraced politician’s name could be negatively affected — leading the algorithm to associate their name with non-constructiveness. To avoid these biases, we swapped all named entities in news stories with neutral substitutes, ensuring the algorithm focuses on the content and narrative of the story, rather than the people, organizations or locations involved.
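The exact substitution method is not described, but the idea can be sketched with an off-the-shelf named-entity recognizer such as spaCy, replacing people, organizations and places with neutral placeholders. The model choice and placeholder strings below are assumptions for illustration.

```python
import spacy

# Any pretrained NER pipeline will do for the sketch; this choice is an assumption.
nlp = spacy.load("en_core_web_sm")

# Entity types to neutralize and the placeholders that replace them.
PLACEHOLDERS = {"PERSON": "[PERSON]", "ORG": "[ORGANIZATION]", "GPE": "[PLACE]", "LOC": "[PLACE]"}

def neutralize_entities(text: str) -> str:
    """Swap named people, organizations and locations for neutral substitutes."""
    doc = nlp(text)
    pieces, last = [], 0
    for ent in doc.ents:
        if ent.label_ in PLACEHOLDERS:
            pieces.append(text[last:ent.start_char])
            pieces.append(PLACEHOLDERS[ent.label_])
            last = ent.end_char
    pieces.append(text[last:])
    return "".join(pieces)

# Typically turns "Jane Doe resigned after pressure from Acme Corp in Berlin."
# into "[PERSON] resigned after pressure from [ORGANIZATION] in [PLACE]."
```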

Checking the Algorithm’s Work

The model could still have other biases, so we use state-of-the-art methods from the field of explainable AI to provide simple explanations for the elements the algorithm bases its decisions on. If the AI makes a mistake or exhibits biased behavior, our developers can identify the problem and correct the model. This is much more transparent than traditional “black box” AI implementations.
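The specific explainability technique is not named here, but one simple way to illustrate the idea is an occlusion test: drop one word at a time and see how much the model’s “constructive” score changes. The snippet below, reusing the fine-tuned model and tokenizer from the earlier sketch, is only a stand-in for the state-of-the-art methods mentioned above.

```python
import torch

def token_importance(text: str, model, tokenizer):
    """Occlusion-style explanation: how much does dropping each word change
    the predicted probability that the article is constructive?"""
    def constructive_score(t: str) -> float:
        inputs = tokenizer(t, return_tensors="pt", truncation=True, max_length=512)
        with torch.no_grad():
            logits = model(**inputs).logits
        return torch.softmax(logits, dim=-1)[0, 1].item()  # P(constructive)

    base = constructive_score(text)
    words = text.split()
    scores = []
    for i, word in enumerate(words):
        without = " ".join(words[:i] + words[i + 1:])
        scores.append((word, base - constructive_score(without)))
    # Words whose removal lowers the score most contributed most to the decision.
    return sorted(scores, key=lambda pair: pair[1], reverse=True)
```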

These methods are essential for us at the Constructive Institute because they help us understand and trust the decisions made by the algorithm — and ensure that the model accurately and consistently identifies constructive journalism.


Founder and CEO Ulrik Haagerup on the impact of constructive journalism.

In the newsroom: European Broadcasting Union

The European Broadcasting Union — a coalition of public service media organizations in 56 countries — uses the Constructive News Algorithm for its A European Perspective project, which allows member outlets to share news stories from around Europe. Local editors curate stories from the stream of 2,000+ articles every day using subject tags — and now, they can filter for constructive stories identified by the algorithm.

The Constructive News Algorithm makes it easy for editors to quickly identify high-quality, constructive stories — and in turn, fulfill the EBU’s public service mission to help audiences contextualize local issues and learn from the experiences of other countries. The first-of-its-kind partnership underscores how newsrooms can use artificial intelligence tools to spread high-quality, trusted journalism and foster collaborations across borders.

The Constructive News Algorithm is an exciting addition to our digital news hub. The algorithm has been developed by the Constructive Institute with important input from our members. It provides an additional important insight into our journalism that inspires new ideas and seeing challenges from different perspectives. This is particularly important for public service media and its contribution to society.
Justyna Kurczabinska, who heads the EBU’s News Strategy and Transformation of Eurovision News


Frequently Asked Questions

Who are the project partners?

The Constructive Institute has been developing the News Algorithm since 2019, with support from the Google News Initiative and the European Broadcasting Union’s “A European Perspective” project, co-financed by the European Union’s Preparatory Action “European Media Platforms.”

What data was used?

We used articles from large Danish news organizations and members of the EBU’s “A European Perspective” project. We included both web and print news articles, and omitted obituaries, interviews, sports stories, etc., in order to optimize the dataset for our specific use case. This also means that the model does not handle these types of articles. In total, 28 news media contributed articles for the dataset. All articles were translated to English using DeepL.
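Because the model works on English text, articles in other languages have to be translated before they can be scored. A minimal sketch of that preprocessing step with the official deepl Python client might look like this; the auth key and target-language variant are placeholders.

```python
import deepl

# Placeholder credentials; supply a real DeepL API key in practice.
translator = deepl.Translator("YOUR_DEEPL_AUTH_KEY")

def to_english(article_text: str) -> str:
    """Translate an article to English before it is passed to the classifier."""
    result = translator.translate_text(article_text, target_lang="EN-GB")
    return result.text
```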

What’s next for this project?

With the field of language models constantly evolving, we are quickly adapting our algorithm for many different use cases. We believe that advances in language technology will allow us to create a set of tools that help news editors monitor their output, not only by the clicks it generates but also by the editorial values that shape responsible journalism.

How can the algorithm help newsrooms?

The Constructive News Algorithm is designed to complement and streamline journalists’ editorial processes by enabling them to audit their own coverage for constructive elements.

Who were the model’s human coders?

A group of primarily Danish Fellows and employees from the Constructive Institute did the initial coding of more than 3,000 news articles, which was used to train the Constructive News Algorithm. We’re always looking to expand our dataset of news stories as well as our coding team to ensure our algorithm takes into account a wide variety of perspectives.

How can I learn more or get access to the tool?

We are currently testing the Constructive News Algorithm with multiple newsrooms. If you’d like to get involved or have questions, please reach out to Peter Damgaard Kristensen, COO of the Constructive Institute.