News Analysis Project

Methodology

What news outlets do you use? Do you ever notice potential biases in their content? The question of news media bias has always been a matter of debate. We have to ask ourselves what is considered acceptable when reporting must respect real-world events while also remaining honest. In an internet age where opinions and ideas can be debated almost violently, it is important to step back and study the things we consume in our media diets. The old phrase, “you are what you eat,” plays into the idea that the media we consume should be “healthy,” and that we should at least properly understand what we take in on a daily basis.

Our project team was spurred on by the now infamous “Alternative Facts” quip from Kellyanne Conway’s Meet the Press interview. It landed like a bad punchline amid the accusations of “fake news” and the questions about journalistic integrity that marked the early days of the Trump administration. We came together to brainstorm where we could most easily find patterns of bias in news content, and we settled on the language used in its delivery.

The News Analysis Project was developed to look deeper into these language patterns and discover what information we could gain from them. We wanted to start with a small sample of news sources, so we settled on four of the more commonly known outlets: FOX, CNN, NPR, and BBC. This way we could take a single news topic and find four separate viewpoints (one for each news source) to compare.

We understood that the articles, though reporting the same content, would differ in writing style and tone, which prompted our search for descriptive and emotional nouns, verbs, adverbs, and adjectives in the text. Our markup effort also included a system of “quote assessment,” in which we gathered loose demographic data about the individuals cited in each article. Our markup of these individuals centered on their gender and political leanings, in an attempt to see who is represented in the text and whether what was quoted had any connection to the language of the articles.
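
As a rough illustration of this quote assessment, a cited individual might be encoded along the following lines (the element name, attribute names, and speaker here are hypothetical placeholders, not our exact schema):

    <!-- Hypothetical sketch: a quoted individual tagged with gender and political leaning. -->
    <quote speaker="Jane Doe" gender="female" politics="liberal">
        The speaker's quoted words would appear here exactly as printed in the article.
    </quote>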

We selected five different political topics: the 2017 election results, Trump’s travel ban, the Women’s March, questions regarding Trump’s taxes, and the US bombing of the Syrian regime. Each topic involved a different cultural concern and its own set of buzzwords (war, gender, and so on) around which we could focus our study.

To capture this information, we used the Extensible Markup Language (XML) to encode copies of the articles as text files that we could reference later. We gathered surface-level article data (author, article title, date, links, etc.), as well as the entirety of the article text, before taking steps to tag the emotional language. Each team member first developed their own sense of what counted as “emotional”; we then came together, compared our perceptions, and settled on words that were either descriptive or pointed in their usage (via punctuation, formatted emphasis, or placement in the sentence). After marking all of the language, we identified quoted individuals and assigned them categorical traits based on political and gender demographics. With the markup complete, we then developed a variety of comparisons with different coding techniques to build visualizations for our findings.
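
A minimal sketch of what one encoded article could look like, assuming hypothetical element names and placeholder values (our actual schema, titles, and dates differ):

    <!-- Hypothetical sketch of one encoded article; all names and values are placeholders. -->
    <article source="CNN" topic="travel-ban">
        <metadata>
            <author>Staff Writer</author>
            <title>Sample article title</title>
            <date>2017-01-30</date>
            <link>http://example.com/sample-article</link>
        </metadata>
        <body>
            <p>Critics called the rollout
                <emotional type="adjective">chaotic</emotional>, while supporters praised its
                <emotional type="noun">resolve</emotional>.</p>
            <quote speaker="Jane Doe" gender="female" politics="liberal">
                A cited individual's words, tagged as sketched above.
            </quote>
        </body>
    </article>

Given markup of roughly that shape, one of the simpler comparisons could be computed with a short XSLT stylesheet that counts the tagged emotional words per article; this is only an illustration of the approach, not the code we actually used:

    <?xml version="1.0"?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
        <xsl:output method="text"/>
        <!-- Print each article's source outlet followed by its count of tagged emotional words. -->
        <xsl:template match="/">
            <xsl:for-each select="//article">
                <xsl:value-of select="@source"/>
                <xsl:text>: </xsl:text>
                <xsl:value-of select="count(.//emotional)"/>
                <xsl:text> emotional words&#10;</xsl:text>
            </xsl:for-each>
        </xsl:template>
    </xsl:stylesheet>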

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.