Agenda-setting and Framing can’t be avoided

There are two subjects I want to discuss in this blog post: agenda-setting and framing, and for both I want to ask whether they can be avoided at all. Let's start with agenda-setting: the idea that there is a strong positive correlation between the issues most often discussed by media outlets and the issues the audience regards as most important (McCombs & Shaw, 1972). A good example of agenda-setting is the annual 'Zwarte Piet' (Black Pete) controversy in the Netherlands. You saw it everywhere in the news, both before and during the arrival of Sinterklaas in the Netherlands. At the same time, it felt like every conversation you overheard was about the same subject. This seems to imply that people considered it very important. In fact, protests were held during the arrival of Sinterklaas. I can't help but wonder whether this would have happened at all without all the media attention. Of course, the media paid attention to the protests as well. A vicious circle is thus created: media attention leads to an event, which leads to more media attention, which leads to opinions polarizing further (which will probably be shown in the media through a survey done by Maurice de Hond), which then might lead to more events (like protests), which the media cover once again, and so on. It creates a snowball effect: the media influence reality, then report on it, which in turn influences reality again, until the cycle dies out.

Can agenda-setting be avoided? Yes and no. Thanks to the internet and social media, it is easier nowadays to seek out the news that interests you. In that regard, the correlation between media coverage and the importance assigned to a subject might be weakening. If you do not like the agenda that one online media outlet is pushing, you simply visit another outlet that does focus on the news you're interested in. On the other hand, many 'mainstream' media outlets are still the first place people look online. And news received via social media cannot be trusted until it is properly verified. In a sense, you're always at the mercy of what media outlets want to report on. It cannot always be avoided.

The second subject I want to discuss is framing. Framing is often described as a persuasion technique employed by media outlets. They use frames: the different ways in which media outlets can effectively and convincingly tell a story to the public (Van Gorp, 2006). The media can use words and images in such a way that they implicitly (or explicitly) 'frame' a subject in a certain light. This relates to agenda-setting: agenda-setting tells you what to think about, while framing influences how we think about the subject we 'should' be thinking about. Van Gorp (2006) notes that framing also refers to the way the receivers of news actively deal with the substance of the news they receive. They have their own views and biases that color their interpretation of a news story, which means framing is not just one-way traffic. Someone who is aware that the media use framing might notice that certain words were chosen to provoke a certain reaction, which in turn could have the opposite effect on that particular reader. In a sense, this makes framing a collaboration between writer and reader, though it may happen subconsciously.

An example of this is an article I read in a Dutch newspaper, the 'Algemeen Dagblad' (AD). It was an interview with a man who had thrown a chair at a judge. This man had lost his daughter and parents-in-law in a car accident, and he considered the sentence the judge gave the perpetrator too low. After reading the article, I empathized with the father and felt anger at what I, at first, perceived as injustice. But then I read the article a second time, taking a closer look at the words used in it. Of course, you can't blame the father for the anger that shows through his words, but in between his quotes it felt like the writer of the article was trying to steer my feelings, with frames like 'the judges threw one witness statement after the other off the table' and 'half of the Netherlands wondered how the hell this sentence was possible'. It felt like the writer wanted to frame the piece as a complaint about the law, while in reality the judges dismissed insubstantial evidence and essentially upheld the 'innocent until proven guilty' maxim. The prosecutor could not prove that the perpetrator was driving too fast, hence the lower sentence. The writer made it sound as if the judges were dismissive and did a lazy job. As the writer also conducted the interview with the father, perhaps this framing happened subconsciously, out of empathy. I can imagine how that could seep in; in fact, it might be hard to avoid at all.

The fact that framing can happen subconsciously is also what makes researching it a challenge (Van Gorp, 2006). A researcher has to judge what is a frame and what is not, and how to interpret a certain frame, which in turn relies on the researcher's own frames. And one is not always conscious of one's own frames and biases. In fact, even employing frames can happen subconsciously. Perhaps a writer is unaware of his or her biases, or simply considers an opinion so obvious that it is not given a second thought. For instance, when America performs an act of war, it 'engages in defense' (Chomsky, 2002). No matter what it is doing, and usually America is the one invading other countries. But it is a frame that few question, and as a result the actions committed under it might not be seen as a bad thing.

The example above can also be seen as 'reframing', where an existing frame is recast in order to change its cognitive effect. 'Attacking' becomes 'engaging in defense', which sounds as if there is something you need to defend yourself against. This technique is also employed by some political websites. For instance, a right-leaning website could describe a socialist politician as a 'leftist treehugger'. Whenever this politician talks about the environment, that frame is triggered for some readers, making it easy to dismiss the politician instantly, without even listening to the story, because they're a 'leftist treehugger' (just check a comment section to see what I mean. Or don't, so you won't lose faith in humanity).

There is so much to talk about when it comes to framing and agenda-setting that it is tough to fit it all into one post. The subject is so broad and all-encompassing, and such a big part of journalism, that it is hard to distill it to its essence. The main conclusion is: framing is part of journalism, and you can't avoid it. You'll have to deal with it and make your own choices. There are many nuances to consider, and since journalism is human work (until robots replace us all, naturally), (subconscious) framing will probably always happen. As Goffman (1974) stated, we cannot fully comprehend the world and need frameworks to make sense of it and to process new information. That is why I think framing can never be avoided completely, but with some effort, with critical thinking aimed at both our own writing and the writing of others, and with an awareness of our own biases, it might be possible to minimize it. That is, if the writer in question wants to at all. Often you simply can't escape it, and then you have to choose between one frame or the other.

References

Chomsky, N. (2002). Understanding Power. New York: The New Press.

Goffman, E. (1974). Frame analysis. New York: Free Press.

McCombs, M. E., & Shaw, D. L. (1972). The agenda-setting function of mass media. Public Opinion Quarterly, 36(2), pp. 176-187.

Van Gorp, B. (2006). Een constructivistische kijk op het concept framing. Tijdschrift voor Communicatiewetenschap, 34(3), pp. 246-256.

Seeing is Believing… or is it?

Readers rely on journalists to do their jobs adequately. There has to be a degree of trust between reader and journalist: you trust that the information presented to you has been thoroughly verified and checked by a professional. This is a necessity; after all, most people simply do not have the time to fact-check every bit of information they encounter. They read the newspaper in the morning, before work, in order to stay properly informed. It therefore helps if a journalist is able to effectively distill the essence of a story. However interesting a subject may be, not everyone is willing to read twenty pages on it.

Visualization

Visualizations can be a helpful tool for telling your story. They have the ability to reveal patterns and trends that words cannot always convey, which makes them invaluable to a data journalist. After all, numbers can be quite daunting at times, especially in large quantities. Visualizations can translate numbers into comprehensible images that illuminate the story you want to tell. A good example of this is an election map made by Texty, which visualizes the results of the regional elections in Ukraine.

[Image: map of the Ukrainian regional election results]

Blue is pro-Russia, orange is pro-Europe.

This image illuminates to the reader the divide that exists in the country. As can be seen, the western part of the country voted for the pro-Europe party, while the eastern part voted for the pro-Russia party. An image such as this one tells a story in itself, because it insightfully illustrates the divide; a divide that, in this case, fuels the tensions in the country.

Visualizations gone wrong

However, images are not always this insightful, even if they appear to be at first glance. An example is a poll chart I encountered:

At first glance, it gives the impression that confidence in Obama's economic plan is steadily increasing. However, the x-axis tells a different story: confidence in the plan is actually decreasing. The months are displayed in reverse chronological order, making it easy to be misled when taking a quick look. I suspect that was the intention.
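To make the trick concrete, here is a minimal sketch in Python with matplotlib (using made-up numbers, since the original poll data is not available) of how reversing the axis order alone flips the visual impression:

```python
# Hypothetical monthly numbers; the real poll data is not available.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr"]   # chronological order
confidence = [62, 57, 51, 44]           # confidence is falling

fig, (ax_honest, ax_tricky) = plt.subplots(1, 2, figsize=(10, 4), sharey=True)

# Honest version: time runs left to right, so the decline is obvious.
ax_honest.plot(months, confidence, marker="o")
ax_honest.set_title("Chronological: confidence is falling")

# Misleading version: the same data with the month order reversed,
# so the line appears to climb at a quick glance.
ax_tricky.plot(months[::-1], confidence[::-1], marker="o")
ax_tricky.set_title("Reversed months: 'looks' like a rise")

plt.tight_layout()
plt.show()
```

Now for another example of a misleading visual: a 3D pie chart compared with a regular, flat pie chart.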

The 3D pie chart might look more visually attractive, but it also makes Item A look as big as (or even smaller than) Item C. In the regular pie chart, however, it can be seen that Item A is actually more than twice as big as Item C.
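As a minimal sketch of the honest version (again with hypothetical numbers): matplotlib draws flat pie charts, and printing the percentages on the slices makes the real proportions hard to miss.

```python
# Hypothetical shares; note Item A is more than twice Item C.
import matplotlib.pyplot as plt

labels = ["Item A", "Item B", "Item C", "Item D", "Item E"]
shares = [35, 25, 15, 15, 10]

fig, ax = plt.subplots()
# A flat pie with printed percentages leaves no room for the
# perspective distortion of a tilted 3D pie, where slices near
# the "front" look larger than they really are.
ax.pie(shares, labels=labels, autopct="%1.0f%%", startangle=90)
ax.set_title("Flat pie chart: the proportions match the data")
plt.show()
```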

These are good examples of what can go wrong when visualizing data. Alberto Cairo, instructor of the fifth module of the Online Data Journalism Course, stated that good visualizations are beautiful, functional and insightful. While you can attract viewers by creating an appealing visual, what matters most is what the visual is able to convey. The 3D pie chart, in that regard, was a case of style over substance.

Avoiding visual errors

Alberto Cairo named the following four features as essential for a great visualization:

  1. Functional: The shape of the graphic is altered to fit the questions you want to answer with the visual.
  2. Beautiful: Attractiveness will make more readers want to read it.
  3. Insightful: The visual puts your data in context.
  4. Enlightening: The information revealed by the visualization shapes the perception of the reader.

You should always keep in mind that the visuals you create for a particular story are graphical representations of evidence. While a beautiful visualization might attract viewers, it is the substance that matters most in the long run. This falls in line with my earlier blog posts, where I (hopefully) illustrated the importance of being accurate. For me, accurate data is the foundation on which you build your story. Both good visuals and good writing can help you tell your story more clearly, but the facts count the most. You can be a verbose writer, throwing eloquent and difficult words in left and right, but a well-written, inaccurate piece is still inaccurate. The same is true for visualizations. You have to choose the graphic form that is functional, represents the evidence, and is able to answer the questions your story might raise. Take the 3D pie chart: it might look fancier, but it also misrepresents the evidence you want to present to your readers, which makes it an inaccurate way to display that evidence. A normal pie chart might be less visually attractive, but at least it represents your evidence more precisely and makes its meaning easier to extract. And that will only make it easier for you to tell your story.

Finding Stories in Data

In 1998, a research paper was published claiming that autism disorders were linked to MMR (measles, mumps and rubella) vaccines. The results were widely publicized in Britain, leading to a sharp decline in uptake of the vaccine in that country (Elliman & Bedford, 2007). Though data journalism was not as prevalent in 1998 as it is now, a simple look at the methodology should have prevented at least some of the British media outlets from reporting this. The supposed relation between autism and vaccines was based on only 12 patients, far too few to be representative. Moreover, almost all other research on the link between autism and vaccines found no link at all between the two.
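A quick back-of-the-envelope sketch (my own illustration, not part of the original coverage) shows why 12 patients is far too few: a rough, conservative 95% margin of error for a proportion estimated from n cases is about 1/sqrt(n), which for n = 12 swamps almost any effect.

```python
# Rule of thumb: a conservative 95% margin of error for a
# proportion estimated from n cases is roughly 1 / sqrt(n).
import math

for n in (12, 100, 1000):
    moe = 1 / math.sqrt(n)
    print(f"n = {n:4d}: margin of error ~ +/-{moe:.0%}")

# Output:
# n =   12: margin of error ~ +/-29%
# n =  100: margin of error ~ +/-10%
# n = 1000: margin of error ~ +/-3%
```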

Confirmed cases of measles rose in England and Wales as uptake of the vaccine decreased, even leading to the first measles-related death since 1992 (Health Protection Agency, 2006). It is both an example of sources not being verified properly (as discussed in my previous blog post) and an (early) example of data not being properly validated: the sample was not representative, and other data contradicted the link between autism and vaccination. Later it turned out that the researcher who claimed the link had committed fraud by manipulating the numbers in his research.

In this blog post we will take a closer look at the validation of data sets. I will walk through the process of turning data into a story and point out the potential pitfalls along the way. By illuminating these, I sincerely hope cases like the MMR vaccine controversy can be prevented.

Turning data into stories

According to Simon Rogers (via http://datajournalismcourse.net), journalism means reporting facts in such a way that people can better understand the issues that matter. Related to that, it is your role as a data journalist to bring data to life. Using numbers, you have to find the best possible way to tell the story of the data. When requesting data, it helps to have a list ready of questions you want to answer: you need to know what you want to get out of a data set, as it can only answer questions about the variables it contains. Of course, a data set can inspire questions on its own as well. And the fewer numbers you need to tell the story, the better.

There are four key roles involved (whether in a team, a duo, or for a lone wolf) when it comes to turning data into stories:

  1. Research
  2. Writing
  3. Development and coding
  4. Designing and visualising

The essential information you can get out of the data comes from asking the five W’s of journalism (Scanlan, 2003):

  • Who? – Find the source of the data and verify how reliable it is. Your piece is also considered more reliable when you can be transparent about your source.
  • What? – The point you're trying to get across. Tell the story in a clear way that bridges the gap between the data and the reader.
  • When? – The date from which your data stems.
  • Where? – Geolocation: the place the data comes from or applies to.
  • Why? – The hardest question: there may be correlation, but that does not mean there is causation.

According to Paul Bradshaw (in the third module of the Online Data Journalism Course), the usual starting point is either a question that needs data, or a data set that needs questioning. He sees the compilation of data as what defines either of them as data journalism. It is no surprise, then, that compiling sits at the top of his inverted pyramid of data journalism (Bradshaw, 2011).

Inverted Pyramid Theory of Data Journalism

[Image: Bradshaw's inverted pyramid of data journalism]

Compiling your data is the foundation: everything is built upon it, and you will return to your data at every other stage. Cleaning means removing errors from the data. Next comes context, which can be found by asking the five W's; find a story that is both newsworthy and easy to explain to someone who has never heard about it before. Combining means merging two or more data sets, so that you have multiple sources for the same story. Finally, you communicate your story: visualize the results, create a narrative, and so on.
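As a minimal sketch of what the cleaning stage can look like in practice (the file name and column names below are hypothetical placeholders), assuming Python with pandas:

```python
# Minimal cleaning pass with pandas; "grades.csv" and the column
# names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("grades.csv")

# Remove exact duplicate rows.
df = df.drop_duplicates()

# Normalize messy text fields ("  aMSTERDAM " -> "Amsterdam").
df["city"] = df["city"].str.strip().str.title()

# Coerce grades to numbers; unparseable entries become NaN
# instead of silently corrupting later calculations.
df["grade"] = pd.to_numeric(df["grade"], errors="coerce")

# Flag impossible values for manual checking rather than
# deleting them blindly.
suspect = df[(df["grade"] < 1) | (df["grade"] > 10)]
print(f"{len(suspect)} rows with out-of-range grades to check by hand")

df = df.dropna(subset=["grade"])
```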

What can go wrong?

Based on my first two blog posts, I think it is fair to conclude that the foundation of all data journalism is the need to be accurate. Hermida (2012) stated something similar when he named truth, facts and reality as the three values a (data) journalist must adhere to. However, having the intention to be accurate and actually being accurate are two different things. Various steps in the process of turning data into stories, as discussed above, can lead to errors.

Being underprepared

Knowing what you're dealing with saves you a lot of time, and preparing well by going through your data set thoroughly will only help you. This might seem obvious, but it is always good to remind procrastinators (such as myself) that putting time into preparation is essential, especially with deadlines nearing, which can otherwise lead to lazy journalism and sloppy verification.

Complicated story

A complicated story is not an error in itself, but it can turn into one if you cannot translate the data into understandable language for a lay reader unfamiliar with the subject.

Errors in the data set

No data is infallible. Nils Mulvad, for instance, discovered while approaching school leaders that the grades in the data released by the Danish ministry of education were miscalculated (Bradshaw, 2013). If a journalist had not checked this properly, it could have led to published errors. Check your data extra carefully if it all seems too good to be true.
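One way to catch such errors is to recompute any figure the source claims to have derived. A minimal sketch (the file and column names are hypothetical) of checking published averages against the underlying numbers:

```python
# Recompute the source's own summary figure; names are hypothetical.
import pandas as pd

df = pd.read_csv("ministry_grades.csv")

# Average grade per school, computed from the raw rows.
recomputed = df.groupby("school")["grade"].mean().round(1)

# The average the ministry itself published per school.
published = df.groupby("school")["published_average"].first()

mismatches = recomputed[recomputed != published]
print(f"{len(mismatches)} schools where the published average "
      f"does not match the underlying grades")
```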

Errors in interpreting the data set

Especially in big data, it is easy to find correlations between wildly different variables. An example of this is the positive correlation between ice cream sales and violent crime (Peters, 2013). Correlation does not necessarily imply causation. For lots of funny examples of this, check this site: http://www.tylervigen.com/
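The ice cream example is easy to reproduce: both series respond to temperature, so they correlate strongly without one causing the other. A toy simulation with made-up coefficients:

```python
# Toy simulation: ice cream sales and violent crime never
# influence each other, but both depend on temperature.
import numpy as np

rng = np.random.default_rng(42)
temperature = rng.uniform(0, 30, size=365)  # made-up daily temperatures

ice_cream_sales = 50 + 10 * temperature + rng.normal(0, 20, size=365)
violent_crimes = 5 + 0.8 * temperature + rng.normal(0, 3, size=365)

r = np.corrcoef(ice_cream_sales, violent_crimes)[0, 1]
print(f"correlation: {r:.2f}")  # strongly positive, yet no causation
```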

Unable to find structure in data

This is not necessarily the end of the world, but as Paul Bradshaw illustrated in the third module of the Online Data Journalism Course, structure makes scraping a lot easier. The more structure, the more repetition, and the easier it is to set up a scraper for the repetitive tasks you would otherwise have to do by hand, as the sketch below shows.
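A minimal scraping sketch (the URL and the CSS selector are hypothetical placeholders), using requests and BeautifulSoup, to show how repetition in the page does the heavy lifting:

```python
# Minimal scraping sketch; the URL and selector are placeholders.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com/results").text
soup = BeautifulSoup(html, "html.parser")

# Repetition is what makes this work: every result sits in the
# same kind of table row, so one selector fetches them all.
for row in soup.select("table.results tr"):
    cells = [td.get_text(strip=True) for td in row.find_all("td")]
    if cells:
        print(cells)
```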

Unfamiliar with tools

Of course, in a team the work can be divided, so that nobody has to work with something they're unfamiliar with. Nevertheless, it might be a good idea to become familiar with the tools data journalists often use (for instance, Google Drive spreadsheets for scraping). Just in case.

Confirmation bias

Confirmation bias is the tendency to seek out or interpret information in a way that confirms one's beliefs or hypotheses (Miller et al., 2009). An example is the Daily Mail still reporting on links between autism and MMR vaccines, even after the original 1998 paper had been retracted and exposed as fraudulent (Bloodworth, 2013). Though, instead of confirmation bias, one could also speculate about a hidden agenda: there is fear-mongering, and in the early 2000s many British news outlets used the MMR vaccine controversy as a chance to attack the government (Goldacre, 2008). This bias can really creep up on you, so be mindful of it. At times, it is good to question everything; even yourself. That is a good note to end on.

References

Bloodworth, J. (2013). Is the Daily Mail killing children?. Retrieved from: http://leftfootforward.org/2013/04/is-the-daily-mail-killing-children/

Bradshaw, P. (2011). The inverted pyramid of data journalism. Retrieved from: http://onlinejournalismblog.com/2011/07/07/the-inverted-pyramid-of-data-journalism/

Bradshaw, P. (2013). Ethics in data journalism: accuracy. Retrieved from: http://onlinejournalismblog.com/2013/09/13/ethics-in-data-journalism-accuracy/

Elliman, D., & Bedford, H. (2007). MMR: where are we now?. Retrieved from: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2066086/

Goldacre, B. (2008). The media’s MMR hoax. Retrieved from: http://www.badscience.net/2008/08/the-medias-mmr-hoax/

Health Protection Agency (2006). Increase in measles cases in 2006, in England and Wales. CDR Weekly (Online), 16(12). Retrieved from: http://www.hpa.org.uk/cdr/archives/2006/cdr1206.pdf

Hermida, A. (2012). Tweets and truth: Journalism as a discipline of collaborative verification. Journalism Practice, 6(5-6), pp. 659-668.

Miller, F. P., Vandome, A., & McBrewster, J. (2009). Confirmation Bias. VDM Publishing.

Peters, J. (2013). When Ice Cream Sales Rise, So Do Homicides. Coincidence, or Will Your Next Cone Murder You? Retrieved from: http://www.slate.com/blogs/crime/2013/07/09/warm_weather_homicide_rates_when_ice_cream_sales_rise_homicides_rise_coincidence.html

Scanlan, C. (2003). Writing from the Top Down: Pros and Cons of the Inverted Pyramid. Retrieved from: http://www.poynter.org/how-tos/newsgathering-storytelling/chip-on-your-shoulder/12754/writing-from-the-top-down-pros-and-cons-of-the-inverted-pyramid/

Crowdsourced Information: Why Verification of Information is Essential

Reddit and the Boston Marathon bombing

On April 15, 2013, two bombs exploded at the annual Boston Marathon, killing 3 spectators and injuring 264 people. Pictures and videos of the attack found their way onto the internet soon after it happened. The FBI led the investigation into the bombing, but another group set up an investigation of its own: a section of Reddit (a user-generated news site) called 'FindBostonBombers'. This section had been set up by a user of the site in order to gather information and perhaps even to discover the identities of the bombers. Previously, a Reddit user had created a comprehensive timeline of the Aurora shooting and discovered a previously unnoticed social media account of the shooter (Chen, 2012). Many news outlets reported on this discovery. Perhaps based on that 'success', the users thought they could lend a helping hand. This time, however, it turned out differently.

At this point, the FBI (as well as the ATF and Boston law enforcement) was soliciting videos, pictures and other material that might help, making this the most crowdsourced terror investigation in American history (Abad-Santos, 2013). A Reddit user had set up a subreddit in order to mine through the large amount of released pictures. Though the user set up some rules, such as 'do not post personal information', that did not stop some users from creating their own theories and then acting upon them by posting personal information of their suspects. On April 18, grainy pictures of two suspects aired on live television (Shontell, 2013). Based on those pictures, and through miscommunication, rumours, speculation and sheer stupidity, members began to accuse Sunil Tripathi after a photo comparison and a supposed mention of his name on a police scanner. Many vile messages were sent to his personal Facebook page, and his family was harassed as well. The Facebook group dedicated to finding him had to be closed. Why was there a Facebook group, set up by friends and family, dedicated to finding him? Not because they thought he was a suspect who needed to be brought in, but because Sunil Tripathi was missing at the time; it later turned out that he had committed suicide.

Using this case as an example, I want to illustrate how quickly misinformation can clutter discourse. While the users might have had the best intentions, they were in over their heads. Crowdsourced information can be a great source for news, as the rapid growth of social media has made it easier to share pictures, videos and stories, and news institutions increasingly look to the public to help source new information (Silverman & Tsubaki, 2014). But without thorough verification, misinformation will spread.

Verification of rumours

Fear and uncertainty breed rumours, especially in the aftermath of such an attack, with the suspects still on the loose. In times like those, verification is essential, as misinformation only increases the anxiety people feel. Social media can help in both directions: journalists can verify information from sources, and users can in turn verify a tweet or article sent out by a journalist. But social media can also delude the public through spread misinformation and lazy journalism. How can misinformation be countered?

Silverman & Tsubaki (2014), in the Verification Handbook, made a list of fundamentals for verifying information during a disaster (p. 11):

  • Put a plan and procedures in place before disasters and breaking news occur.
  • Develop human sources.
  • Contact people, talk to them.
  • Be skeptical when something looks, sounds or seems too good to be true.
  • Consult credible sources.
  • Familiarize yourself with search and research methods, and new tools.
  • Communicate and work together with other professionals – verification is a team sport.
  • When trying to evaluate information – be it an image, tweet, video or other type of content – you must verify the source and the content.

This gives us a broad overview of the fundamentals one should keep in mind when verifying information during a disaster. The last bullet point is a recent maxim, added in light of the increasing significance of social media. And since the example discussed above deals almost exclusively with social media, it would be madness not to include the four elements a journalist needs to check and confirm for any piece of information or content found via social media (Wardle, 2014), followed by a small sketch of what one such check can look like:

1. Provenance: Is this the original piece of content?
2. Source: Who uploaded the content?
3. Date: When was the content created?
4. Location: Where was the content created?
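As a sketch of one small, automatable step toward the 'Date' and 'Location' checks (the file name is hypothetical, and this is only a first pass: EXIF metadata is easily stripped or forged, so it never counts as proof on its own), assuming Python with Pillow:

```python
# Read the embedded EXIF tags from a submitted photo.
# "submitted_photo.jpg" is a hypothetical placeholder.
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

img = Image.open("submitted_photo.jpg")
exif = img.getexif()

# Camera make/model and timestamps live in the main IFD.
for tag_id, value in exif.items():
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")

# GPS coordinates, if present, live in a separate sub-directory.
gps = exif.get_ifd(0x8825)  # 0x8825 is the GPSInfo tag
for tag_id, value in gps.items():
    print(f"{GPSTAGS.get(tag_id, tag_id)}: {value}")
```

Even when these tags look plausible, Wardle's first question (is this the original piece of content?) still has to be answered by other means, such as contacting the uploader or a reverse image search.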

Preventing a social media debacle

In the case of the 'FindBostonBombers' subreddit, there were a couple of reasons the investigation failed. Apart from not properly verifying their intel and not properly verifying the claims made by the user(s) accusing Sunil Tripathi (such as checking whether Sunil was mentioned on the police scanner at all, and whether this was in relation to the bombing), the community succumbed to confirmation bias. It wanted to be right, and it wanted to be the first to solve the case. Reddit uses upvotes to get the most popular news to the front page, and when there is a strong sense of community and of forming a united front, this can lead to the formation of a hivemind of sorts. Many were upvoting because it fitted the narrative of this community outsmarting the established media. On top of that, there were tweets like: "If Sunil Tripathi did indeed commit this #BostonBombing, Reddit has scored a significant, game-changing victory." and "Journalism students take note: tonight, the best reporting was crowdsourced, digital and done by bystanders. #Watertown" (Madrigal, 2013). This seems to indicate that the community wanted to one-up journalists and the FBI, to prove that they could do it better. 'Journalism' seems to be equated with 'The Establishment' here, and getting a scoop becomes a rebellious act. It suggests a distrust of the media, something two 'Redditors' named as one of the reasons to investigate the bombing on their own (Pickert & Sorenson, 2013).

Communities like this cannot be stopped, and I will not argue that they should be. However, when you start an investigation like this, you also take on a responsibility. When you put someone's personal information online, tagging them as a potential suspect, you take a giant risk that could have a massive influence on that person's life, with real consequences such as arrest or harassment. To address this, I would suggest that Reddit pay more attention to moderating these kinds of initiatives. And I believe they are capable of it. A good example of a well-moderated subreddit is 'AskScience', where you either need to provide sources with your answers to users' questions, or need to prove that you are an expert on a certain topic. Everything else that is not useful gets deleted. I think this might be the way to go for such a community: unfounded accusations and unverified information should be debunked, removed or flagged as false. It might also be a good idea to have guidelines (such as 'do not post personal information; go to the authorities') and to post the verification fundamentals I shared above. Hopefully those would then be used, as applying those fundamentals could have prevented all of this. Or at the very least, it could have cut out a lot of the noise distorting the facts.

References

Abad-Santos, A. (2013). Reddit and 4Chan Are on the Boston Bomber Case. Retrieved from: http://www.thewire.com/national/2013/04/reddit-and-4chan-are-boston-bomber-case/64312/

Chen, B. X. (2012). How Reddit Scooped the Press on the Aurora Shootings. Retrieved from: http://bits.blogs.nytimes.com/2012/07/23/reddit-aurora-shooter-aff/

Madrigal, A.C. (2013). #BostonBombing: The Anatomy of a Misinformation Disaster. Retrieved from: http://www.theatlantic.com/technology/archive/2013/04/-bostonbombing-the-anatomy-of-a-misinformation-disaster/275155/

Pickert, K., & Sorenson, A. (2013). Inside Reddit’s Hunt for the Boston Bombers. Retrieved from: http://nation.time.com/2013/04/23/inside-reddits-hunt-for-the-boston-bombers/

Shontell, A. (2013). What It’s Like When Reddit Wrongly Accuses Your Loved One Of Murder. Retrieved from: http://www.businessinsider.com/reddit-falsely-accuses-sunil-tripathi-of-boston-bombing-2013-7

Silverman, C., & Tsubaki, R. (2014). When Emergency News Breaks. In Verification Handbook (pp. 7-12).

Wardle, C. (2014). Verifying User-Generated Content. In Verification Handbook (pp. 25-34).