Election Observation and Democracy Support (EODS) is the capacity building project for EU Election Observation. It is funded by the European Commission.
The Digital Toolkit is regularly updated, so visit it often to stay informed and make the most of its resources.
If you have any further questions or enquiries, please contact the EODS Team at
office@eods.eu
Welcome to the Digital Toolkit, a web-based resource designed to support election experts and enhance social media monitoring and digital ecosystem observation during EU Election Missions. The Digital Toolkit supports EU Election Mission Core Team members, in particular the Social Media Analyst (SMA) and the Media Analyst (MA), through:
A user-friendly, interactive website designed to serve as a comprehensive central hub for a wide array of resources, including informative documents, multimedia content, and up-to-date materials.
Step-by-step instructions for setting up Social Media Monitoring (SMM) projects, ensuring a harmonized and consistent approach across missions.
A repository of EODS-drafted documents, with links to EU divisions and major election organizations, promoting knowledge-sharing & best practices.
This Digital Toolkit provides an overarching view of the European Union’s approach during Election Missions, enhancing transparency and fostering collaboration with other organisations.
The EU Social Media Analyst employs a range of complementary approaches and techniques to gather information on online election-related content and campaigns. The primary methods for observing, analyzing, and assessing this subject include the following:
involves examining and assessing the relevant legal and regulatory frameworks against international standards and best practices, as well as evaluating compliance with the law (e.g., online campaigns, the role of the EMB)
are crucial to obtain insights into the online ecosystem and campaigns. Context regarding the online campaign is gathered through interviews with relevant stakeholders
resulting from the EU EOM’s social media monitoring efforts, collected according to the methodological framework for each mission
is a complementary element, under the guidance of the SMA, when observing general elections where region-specific campaigns are predominant.
The time between selection notification and core team (CT) deployment can vary across missions. This section outlines best practices for Social Media Analysts (SMA) following selection as a Core Team member, including pre-deployment and preparatory steps to set up social media monitoring projects.
If some of the assigned tasks are not clear, ask FPI, EEAS desk officers for the mission, the Deputy Chief Observer (DCO) and/or EODS for explanations.
such as the Handbook for EU Election Observation, the EU Compendium of International Standards for Elections, the EU EOM Practical Core Team Guidelines and the EU EOM Reporting Manual.
the Ethical Guidelines and the Guidelines on Secure Communication.
Become familiar with past EU support to the country and available reports and documents. These may include previous EOM statements and reports, Election Expert Missions (EEM), Election Follow-Up Missions (EFM), Exploratory Missions (ExM) reports and observer manuals of previous EOMs in the country.
including lists of institutions contacted by these missions; materials could be supplied by the EEAS/FPI to the Deputy Chief Observer (DCO) and then disseminated to the CT.
to get familiar with the national media landscape, freedom of expression status and media legislation. For example, UNESCO, Article 19, Human Rights Watch, Amnesty International, Reporters Without Borders (RSF), Freedom House, Internews, International Media Support, Transparency International, The Global Network for defending and promoting freedom of expression (IFEX), International Fact-Checking Network (IFCN), Academia, etc.
to get familiar with national media landscape, freedom of expression status and media legislation. For example, media regulatory bodies, self-regulatory bodies, civil society organisations monitoring freedom of expression and media developments, as well as academic researchers and journalists. Conduct online research on recent media developments using key words.
as well as international organisations and national CSOs, which carried out media monitoring projects in the past and might conduct similar projects for the forthcoming elections.
and identify critical issues related to media legal framework, offline and online media landscape, freedom of expression.
and social media stakeholders (refer to the Guidelines)
In addition to the posting and advertising of vacancies conducted by the SP, a good practice for hiring staff is to ask the SP for the ToRs of the social media analyst assistant and monitors, and to share them with any peers, colleagues or organisations the SMA may have in the country.
EEAS provides the digital briefing pack during the pre-deployment briefing, as well as the EODS online briefing package. These include briefing notes from EEAS and FPI, the EODS Practical Core Team Guidelines, EU country documents and briefings, ExM reports and contacts, former EU EOM/EEM/EFM mission reports, legal and electoral aspects, the Administrative Arrangement with the host country, the practical guide for EU visitors, EOM guidelines, the Reporting Manual, templates, etc.
In this area of assessment, the goal is to monitor how party and candidate accounts use social media during the election to carry out their campaign online.
The sample for this area of observation should be all social media posts per candidate and party in a defined timeframe, although some threshold to limit the scope may be required. When the number of candidates/parties and the total posts per candidate/party per week exceeds the capacity of the team, try to observe primarily the most relevant publications for all candidates/parties or for each candidate/party (see the Methodological Frameworks section for reference).
Given the data available to researchers, see the possible research areas:
Consider cross-referencing sections on “Information Manipulation” and “Derogatory Speech and Hateful Content” to understand if such harmful techniques are being used by official party or candidate accounts.
First, you will need to come up with a general list of all candidates and parties that you would like to monitor. Consult lists of registered contestants from the electoral commission.
There is a high chance that you will need to limit your list to the top candidates and parties due to time constraints. For example, you may want to pick parties that maintain a large share of representation in the current parliament above a specific threshold. At the same time, you may want to consider if certain parties or candidates have shown a history of harmful online behaviour. Any decision on threshold should be clearly explained to report readers.
Second, you will need to define a timeframe relevant to the online campaign period. There may or may not be an official campaign period. It may also be worth monitoring after election day to identify false claims regarding the election’s credibility and the acceptance of results.
Third, depending on your data collection tool, you may need to find the exact social media handle per party and candidate. Determine if this step is necessary after identifying which data collection tool(s) you will use. Note this can be a time-consuming process, especially if you are looking at many actors.
Using the lists of actors from Step 1, you can start to gather all the social media posts from the selected candidate and party accounts. See the Methodological Frameworks section for guidance on how to collect data. Consider weekly data collection intervals so that team members can label posts in parallel with collection, where relevant under Step 3.
| Research question | Means of analysis |
| --- | --- |
| Question 1 (easiest): Which party or candidate used social media the most for their online campaigning? | Count the total number of posts per candidate and party. |
| Question 2: Which social media platform did parties or candidates use the most during the campaign? | Count the total number of posts per candidate and party per social media platform. |
| Question 3: Which party or candidate did users engage with most on social media platforms? | Count the total number of likes and shares per candidate and party (potentially by platform too). |
| Question 4: Did parties or candidates use negative or positive campaigning techniques? | Label posts as “negative”, “positive” or “neutral” and count the total posts. |
| Question 5: Which topics did parties and candidates discuss during the campaign? | Label posts by topic and count the total posts. |
| Question 6 (hardest): Did parties or candidates make false claims about the election or spread Derogatory Speech and Hateful Content using their official accounts? | Label posts by the respective category and count the total posts. |
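The simpler counting questions above reduce to tallies over the collected posts. A minimal sketch follows; the rows and field names are invented sample data, and the actual column names will depend on your collection tool:

```python
from collections import Counter

# Hypothetical sample of collected posts; in practice these rows would come
# from your weekly data-collection exports.
posts = [
    {"actor": "Party A", "platform": "Facebook", "likes": 120, "shares": 30},
    {"actor": "Party A", "platform": "Twitter",  "likes": 45,  "shares": 10},
    {"actor": "Party B", "platform": "Facebook", "likes": 200, "shares": 80},
]

# Question 1: total number of posts per actor
posts_per_actor = Counter(p["actor"] for p in posts)

# Question 2: total number of posts per actor per platform
posts_per_platform = Counter((p["actor"], p["platform"]) for p in posts)

# Question 3: total engagement (likes + shares) per actor
engagement = Counter()
for p in posts:
    engagement[p["actor"]] += p["likes"] + p["shares"]

print(posts_per_actor)        # Counter({'Party A': 2, 'Party B': 1})
print(engagement["Party A"])  # 205
```

The same tallies can be recomputed per week to track how activity evolves over the campaign period.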
Some online tools for monitoring, collecting and analysing data offer the possibility of tagging or labelling social media publications according to previously defined categories. If so, that may be useful for the analysis. The categorisation of social media posts by candidates and parties is described in the Methodological Frameworks section. First, you should create a list of potential topics. Sometimes there are already useful websites for a given country that list the top political issues. Based on this list and qualitative information, you should limit the number of topics to around ten. Using your final list, you can develop a codebook with definitions and examples for each topic. Then you can label each post by topic, and the final data can be summarised and counted to understand the top-level trend.
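A codebook-driven topic count could be sketched as below. The topics and keywords here are invented placeholders; keyword matching only gives a first pass, and labelling should remain manual or manually reviewed:

```python
from collections import Counter

# Hypothetical mini-codebook: topic -> indicative keywords. A real codebook
# needs full definitions and examples per topic.
codebook = {
    "economy":  ["inflation", "jobs", "tax"],
    "security": ["police", "crime", "border"],
}

def first_pass_topic(text):
    """Suggest a topic for a post, or None if no keyword matches."""
    lowered = text.lower()
    for topic, keywords in codebook.items():
        if any(k in lowered for k in keywords):
            return topic
    return None

# Count suggested topics over a few invented posts
labels = Counter(
    first_pass_topic(t) for t in [
        "Our plan will create jobs for everyone",
        "Crime is rising, vote for safety",
        "Happy national day!",
    ]
)
# `labels` holds the per-topic totals, with None for unmatched posts
```

Posts labelled None are exactly the ones to route to a human coder, which is also how new topics surface for the master list.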
Some social media listening tools include an internal engine that automatically performs sentiment analysis on the collected data. These algorithmic tools go through the text of the posts (and links) and assign each a positive, negative or neutral tone, based on a machine learning algorithm and the positive or negative connotation of certain words. SentiOne, for example, offers such a sentiment analysis function, based on the PANAS (Positive and Negative Affect Schedule) scale developed by John R. Crawford and Julie D. Henry.
Sentiment analysis can help teams get an overall view of the social media landscape and may provide leads to identify the accounts that most favour negative campaigning. However, automatic sentiment analysis is not fully reliable and will depend on the effective performance of the tool used. In addition, machine learning support for this type of algorithmic sentiment attribution tends to be stronger in English than in other languages, which may be a limitation for election missions.
Finally, positive or negative sentiment is attributed to posts according to the words used in them, not according to their positive or negative stance towards an individual or a group. This distinction is important because it means sentiment analysis does not necessarily identify Derogatory Speech and Hateful Content as negative. Even so, it can help identify actors and accounts that are prone to negative comments and posts.
For these reasons, automatic sentiment analysis by a social listening tool should be treated with caution and, if used, validated with a qualitative analysis. Nevertheless, in some cases it may be helpful to look at those results. In this aspect, each team should do its own assessment of the advantages and disadvantages of doing so.
For those who have programming skills, or have team members with programming skills, there are a number of tools for Python and R that allow you to carry out automated sentiment analysis, such as the TextBlob package for Python or the Tidy Text Mining resources for R. Note that such methods do not always work perfectly.
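As a minimal illustration of the word-list principle described above (not any specific tool's actual method), a naive scorer might look like this; the word lists are invented placeholders, and real tools rely on far larger lexicons and trained models:

```python
# Tiny illustrative sentiment word lists (placeholders, not a real lexicon)
POSITIVE = {"good", "great", "support", "win", "hope"}
NEGATIVE = {"bad", "corrupt", "fail", "lie", "fear"}

def naive_sentiment(text):
    """Assign 'positive', 'negative' or 'neutral' by counting list hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(naive_sentiment("A great win for democracy"))     # positive
print(naive_sentiment("They lie and fail the voters"))  # negative
```

The sketch also makes the limitation above concrete: a hateful post containing only neutral-list words would score "neutral", which is why a qualitative check remains essential.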
This area of assessment aims to understand how contestants and other stakeholders use political advertising on social media. However, lack of available data may severely limit the depth of analysis in this area.
First of all, assess if there are legal provisions for online political advertising and ask candidates, where possible, if they plan on buying online advertising, and if so, on which online platforms.
A further research area not covered here would be to understand online political ad use by non-contestants. The most problematic content is usually not pushed by official candidates, so understanding non-contestant advertising is highly important. Such an approach would require analysts to search for candidates and parties as “keywords” rather than by official accounts. Posts campaigning for or against a candidate could then be labelled and quantified.
Using such a keyword search approach is only possible using the Meta Ad Library API and not the Ad Library Report or Google’s Political advertising transparency report which only allows you to search by advertiser.
This step is important to decide if it is even possible to carry out analysis in this area of assessment.
For this area of assessment, you will be monitoring the advertisements made by official candidates and parties. Consider a threshold to limit your list if it is not possible to monitor all within the time period.
See Methodological Frameworks section for more information regarding the Meta Ad Library, Meta Ad Library Report, Meta Ad Library API and Google Political Ad Transparency report. Search for candidates and parties as “keywords” rather than by official accounts. Label and quantify posts campaigning for and against the candidate.
Non-programming (Facebook/Instagram and Google/YouTube)
Manually download data per advertiser using Facebook’s Ad Library report or Google’s Political Ad Transparency Report. Consider which intervals are relevant given time bucketing issues for some tools.
Programming Advanced Method (Facebook/Instagram)
If it is possible to use the Facebook Ad Library API, your analysis can go into more depth.
Take into account the limitations of the data collection from ad repositories:
| Research question |
| --- |
| Question 1 (easier): What was the total spend per party or candidate in the monitoring period? |
| Question 2: Were any advertisements posted during electoral silence periods? |
| Question 3: Which demographics and regions were targeted by each candidate/party? |
| Question 4 (more complex): What messaging did different candidates and parties use in their political ads? |
Generate summary statistics from the data collected for each question. One challenge is that data is sometimes only available within timeframes predetermined by the API, which can make it difficult to produce statistics for the timeframe desired for the election analysis.
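Question 1 above reduces to a group-and-sum over the collected ad rows. A minimal sketch, assuming a CSV export with invented column names (real ad-library exports use different, platform-specific columns and often spend ranges rather than exact figures):

```python
import csv
import io
from collections import defaultdict

# Hypothetical export in the shape of an ad-library report download
report = io.StringIO(
    "advertiser,spend,region\n"
    "Party A,1000,North\n"
    "Party A,500,South\n"
    "Party B,300,North\n"
)

# Question 1: total spend per advertiser over the monitoring period
spend_per_advertiser = defaultdict(int)
for row in csv.DictReader(report):
    spend_per_advertiser[row["advertiser"]] += int(row["spend"])

print(dict(spend_per_advertiser))  # {'Party A': 1500, 'Party B': 300}
```

Grouping by `region` instead of `advertiser` gives a first cut at Question 3 from the same export.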
First, draft a list of different topics or messages. Filter through a few random subsamples of ads per party or candidate to generate this list. Then carry out manual coding and add up the summary statistics. If there are too many ads to monitor, decide to label only posts above a certain threshold. For example, choose a certain number of ads per party or candidate, potentially those ads with the most reach.
Again, lack of available data may be a significant hurdle for a comprehensive assessment of political advertising on social media. Therefore, the social media analyst should work as much as possible with the tools available and consult with local stakeholders to "fill in the gaps" and guide the monitoring process. Also bear in mind that Meta and Google do not exhaust the social media advertising landscape. Other platforms also allow ads but do not provide a comparable accountability dashboard: Telegram has no such dashboard, and TikTok has an Ad Library but claims not to allow political ads (although some political actors have used TikTok influencers to convey their messages). These channels cannot be researched in a consistent and objective manner, but should nevertheless remain on the social media analyst's radar.
Information manipulation can consist of different and integrated tactics, techniques and procedures (TTPs, e.g. coordinated or lone inauthentic actors, click farms, trolls, bots and botnets, cyborgs, other forms of manufactured amplification, etc.). Information manipulation is multifaceted and often created in a coordinated manner across different online platforms. It could be observed not only during the campaign, but also on the election day and prior to/during the announcement of results.
Information manipulation has the potential to exploit existing societal polarisation, suppress independent and critical voices, generate confusion among voters, discredit fact-based information and undermine candidates, institutions, and vulnerable groups. Artificially generated content and dissemination may distort the genuineness of public discourse by creating an impression of widespread grassroots support for or opposition to a policy/issue or individual/group.
One should distinguish between information manipulation that is created and shared within a small like-minded group, which is likely to have limited impact on the electoral process, and manipulation that has the potential to harm the electoral process.
As identifying manipulated information is time consuming and difficult, the best approach is to first narrow down the content you must examine by:
A mixture of approach 1, 2 and 3 is recommended in an ongoing, iterative process. Identifying information manipulation requires some trial and error with different approaches to see which of them yields the best result. After content has been identified via one of the above approaches, one should then seek to prioritise content for deeper examination by asking “does it matter?”.
Sometimes a post identified as information manipulation may have reached a limited number of people relative to others with higher reach; the higher-reach posts should have priority. On the other hand, a large number of low-reach posts may together also influence the election. Combining the three approaches above is the best way to account for all these possibilities.
Identifying online information manipulation may feel like searching for something in the dark at first. None of the available tools alone will be sufficient for a fact-based assessment on the presence of bots, trolls, fake accounts, and other manipulation techniques in online campaigns. Therefore, you often need to focus on some cases and conduct a full analysis of data retrieved via manual verification and OSINT tools to identify information manipulation techniques.
Reaching out to local social media analysts or OSINT experts who already work on the topic is highly recommended. They may already have lists of seed accounts for you to monitor or recommended keywords or places to begin your search.
Look for trending and viral content, which may be spreading unexpectedly fast. How much engagement has the content got in comparison to a typical post of this nature? If the tool or tools you are using have some metric to assess the overperformance of a post, try to use that metric. Otherwise, unusually high reach or total interactions may also be an indicator.
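Where the tool offers no overperformance metric, a simple one can be approximated by comparing a post's interactions to the account's recent baseline. A sketch with invented sample figures (a z-score is one reasonable choice, not a standard prescribed by any particular tool):

```python
from statistics import mean, stdev

def overperformance(post_interactions, history):
    """Z-score of a post's interactions against the account's recent posts.

    A large positive value flags content spreading unusually fast compared
    with a typical post of this nature for that account.
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0
    return (post_interactions - mu) / sigma

# Hypothetical recent interaction counts for one account
history = [100, 120, 90, 110, 95]
z = overperformance(900, history)  # a viral outlier scores far above zero
```

Any threshold chosen (for example, flagging posts with a z-score above 3) should be documented so the selection of cases remains transparent.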
If your team has programming capacity, you can use PyCoornet or CooRnet to identify coordinated link-sharing behaviour on a large sample of URLs. This is extremely useful to quickly identify any network behaviour for a more comprehensive picture. But note that a qualitative check is always needed when using these tools, because coordinated activity can also serve legitimate purposes, for example when an electoral management body makes an important announcement about the election that is widely shared.
Develop a list of actors known for spreading manipulated information. You can compose this list through an iterative search on key divisive keywords, in order to identify the actors that repeatedly, and with greater reach, address those divisive issues. It may also be useful to look in the comment sections of known actors to identify accounts that may be consistently spreading manipulated information.
The challenge with this approach is to have a transparent and consistent method to identify such actors known for spreading information manipulation. If that is not the case, the neutrality of the research may come to be questioned. Being transparent about the method used to identify those actors and about why they were identified is paramount to prevent that.
Look for hashtags or keywords used to push manipulated content. This approach leverages the fact that to spread information, the content creator must enable it to be found. Knowing the often “coded” vocabulary of troublesome movements or ideologies is useful here. For example, the use of polarising or divisive terms tends to be associated with manipulated content, so following those terms and language brings you closer to identifying it.
If content is assessed to have received more engagement than expected, has been shared in diverse environments (i.e., groups), and has spread to different outlets (i.e., platforms), then it can have impact on opinions and thus elections. This may be a useful threshold to consider when narrowing down your sample of posts to analyse from the previous section.
Analysts may then choose to investigate that content further using OSINT practices. They may also decide to code content for specific narratives to draw top-level conclusions (guidance on how to develop and use a codebook is also provided in the Methodological Frameworks section). It may be possible to draw some conclusions about the type of actors spreading such content (e.g., gossip pages, groups favouring a certain party); however, analysts should ensure their data or investigation is conclusive before quoting it in the reports. Remember that your analysis should always result from consistent, objective and transparent criteria.
Investigate suspicious accounts. Is the content being shared via suspicious groups? Checking a group’s history can also indicate whether it may have been set up solely for spreading disinformation. Changes in admins, group name, creation date, and unusual follower/member demographics should all be checked.
Programming helps here, but some of this can be gathered manually, especially unusual follow/member demographics. Account or group names will often, although not systematically, indicate their character, e.g. their political leaning. Also, examine groups/accounts for shared administrators, followers or members to determine if they form a community.
While researching information manipulation, DataJournalism.com’s guides on investigating social media accounts and on spotting bots, cyborgs and inauthentic activity may prove useful.
Verifying specific posts may be necessary in your monitoring, although it will be time consuming to carry out a thoughtful, solid investigation into many posts. Consider testing average time required for your team to investigate a post to determine a realistic sample size for this area of assessment. See some useful resources:
Consider how many people have actually engaged with the content. If you find that a post is sharing false or misleading information, it is important to know how many people that information may have reached. If it only reached one person, the potential harm is far lower than if it had reached one million. You can check metrics about the posts, namely reach and/or interactions, to assess how much attention a post has garnered. This information is also available in the .csv files you are collecting.
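Once flagged posts carry a reach figure, prioritising verification effort is a simple sort. The rows and field names below are invented placeholders for whatever your collection tool exports:

```python
# Hypothetical rows loaded from the monitoring .csv export
flagged = [
    {"url": "post1", "reach": 1_000_000, "label": "false"},
    {"url": "post2", "reach": 150,       "label": "false"},
    {"url": "post3", "reach": 40_000,    "label": "misleading"},
]

# Investigate the highest-reach items first: their potential harm is greatest
by_harm_potential = sorted(flagged, key=lambda p: p["reach"], reverse=True)

print([p["url"] for p in by_harm_potential])  # ['post1', 'post3', 'post2']
```

Because verification is time-consuming, this ordering also gives a natural cut-off once the team's realistic sample size (see above) is reached.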
Understanding common narratives and tactical shifts is highly interesting. However, this method will require manual coding of a selected sample of false or misleading posts. If you are already planning on manually coding a selection of posts as false or misleading, this would be an easy and highly useful element to add to your analysis.
Create a list of false or misleading narratives based on qualitative research and a first review of false posts. From this, make a coding guide with clear definitions and examples for each category. Label posts accordingly and add more narratives to your master list as they come up. You may want to include specific categories for false information that specifically target electoral integrity. It may be useful to consider CT members such as Political Analyst and Election Analyst and social media companies’ policies when defining your categories.
For this, it may be useful to take some inspiration from the election policies set up by social media platforms to moderate content online, like these examples:
Once the posts are labelled, analyse top level summary statistics to understand which narratives were most important. Furthermore, how did the narrative shift over the campaigning period? Are there any feedback loops between niche accounts spreading such narratives and mainstream actors?
In order to identify and analyse information manipulation, try to follow these practical tips for guidance:
The identification of instances and volume of Derogatory Speech and/or Hateful Content (see Glossary) during the election is one of the areas of assessment for the social media monitoring project. Research on this subject can be carried out in a way similar to the Information Manipulation section, with three different approaches:
Based on the political context and team capacity, choose the most appropriate method. You will most likely already be monitoring candidates and parties, which only requires adding an additional layer of analysis. However, if feasible, monitoring communities perpetuating hate can provide an early warning of new hashtags or terms. In many countries, these communities exist on niche platforms, which are more difficult to monitor. Consider whether the benefits of monitoring niche platforms outweigh the manual work of searching in the dark to identify perpetrators. Relevant factors include whether they could provide a worthwhile early warning for your monitoring or influence a significant portion of the population.
As usual, a combination of the 3 approaches may provide the best results.
| | Monitoring hateful keywords | Monitoring candidates or parties for hate | Monitoring hate communities | Monitoring vulnerable targets of hate |
| --- | --- | --- | --- | --- |
| How | Keyword search to identify posts which can be qualitatively and quantitatively analysed. | Monitoring of posts by official candidate and party accounts, followed by qualitative and quantitative analysis. | First identify actors in the hate community. Then monitor those actors on an ongoing basis, collecting posts and carrying out analysis. | First identify 2-3 worthwhile targets. Then monitor mentions of those individuals using a keyword search and/or comments on each person’s social media accounts. |
| Challenge | This approach only works well for text-based platforms, like Facebook or Twitter. It will be less efficient at capturing hate in images or videos. | If you conclude an actual party or candidate post constitutes “hate”, this is highly relevant for the election observation. | The challenge here is first identifying the “seed accounts to monitor”. Consider pre-monitoring using searches to create your list of relevant accounts. This may be time-consuming, but tends to be fruitful. | Depending on the tool used to collect the data and on the social media platform, comments may not be available for collection. |
The ability to fully implement these approaches to monitoring Derogatory Speech and/or Hateful Content will depend on the tools used to collect and analyse data. The frameworks suggested in the corresponding section are designed to allow the social media analyst to pinpoint content that may be derogatory or hateful.
Step 1: Define relevant keywords and actors. Your Derogatory Speech and Hateful Content lexicon should be made up of inflammatory language, particularly terms that could be used to target vulnerable populations. This list should be as widely encompassing as possible to gather many posts that can be analysed in a more refined way later. This process may be carried out through brainstorming with the local team and online research. But take into consideration that some SMM teams may be uncomfortable discussing hate speech and the associated vocabulary.
You may start from an initial list and search it to identify further language commonly used alongside the keywords already identified (a snowball strategy, as described in the Methodological Frameworks section). Keywords and hashtags will most probably evolve during the course of the mission, reflecting the different stages of electoral preparations (from voter registration to tabulation and the announcement of results) and the political events taking place in the country (rallies, speeches, incidents, arrests, protests, etc.).
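The snowball step can be given a mechanical first pass: count which terms co-occur with the seed keywords in already-matched posts, then review frequent co-occurring terms as candidate additions. The seed term and post texts below are invented placeholders, and real text would need proper tokenisation and stop-word filtering:

```python
from collections import Counter

SEED = {"terma"}  # placeholder seed keyword, stored lower-cased

# Posts already matched by the seed search (invented examples)
matched_posts = [
    "termA is everywhere, also termB trending",
    "they push termA and termB again",
    "termA only here",
]

# Count words co-occurring with the seed across distinct posts
co_occurring = Counter()
for post in matched_posts:
    for word in set(post.lower().split()):  # naive split; real text needs tokenising
        if word not in SEED:
            co_occurring[word] += 1

# Terms appearing in 2+ matched posts are candidates for the lexicon
candidates = [w for w, n in co_occurring.items() if n >= 2]
```

Every candidate still needs human review before joining the lexicon, both to discard noise words and to keep the selection method transparent and documentable.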
The social media analyst will consequently run searches with those hashtags and keywords in the tools in use, selecting the timespan, accounts, geographical relevance, etc. Social media listening tools like SentiOne offer the possibility to “save searches” or create “projects” based on the selected keywords and hashtags.
Often, Derogatory Speech and Hateful Content lexicons already exist for a given country. A Google search will be the best place to start. You may also find it helpful to get in touch with researchers, civil society organisations, or academics who have developed such lists or carried out this kind of monitoring before.
Step 2: Data collection. While it is possible to use a keyword-based lexicon approach for text-based platforms such as Facebook and Twitter, for YouTube, Instagram or TikTok, any keyword-based search will produce weaker results because the content is video or image, not text.
Bear in mind that hateful or derogatory posts are likely to be deleted by social media platforms, so it is important to take screenshots of images and save post data in real time, or use an archiving tool. Likewise, most data extracted from social listening tools will include all information that was public at the date of extraction, but not the images. If an image is relevant to the analysis of Derogatory Speech and Hateful Content, taking screenshots is advisable.
Step 3: Data analysis. How do you analyse your collected social media posts that are using the inflammatory or derogatory terms from your lexicon? If you're collecting data from social media, you can categorize certain pieces of content as derogatory or hateful and sub-categorize them according to further criteria:
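Once the criteria are defined, a first pass of this categorisation can be sketched as follows. The lexicon terms and sub-categories are invented placeholders, and every automated hit still requires human review before being counted as derogatory or hateful:

```python
# Hypothetical lexicon mapping inflammatory terms to a sub-category
# (here, the group targeted); terms are placeholders for illustration.
LEXICON = {
    "slur1": "ethnicity",
    "slur2": "gender",
}

def flag_post(text):
    """Return (is_flagged, sub_categories) for a post, pending human review."""
    words = set(text.lower().split())
    hits = {category for term, category in LEXICON.items() if term in words}
    return (bool(hits), hits)

flagged, categories = flag_post("they used slur1 again")
# flagged is True and categories == {"ethnicity"} for this sample post
```

The sub-category labels make it straightforward to report not just the volume of hateful content but which groups it targets.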
This section presents a method of monitoring perpetrators of hate: official parties or candidate accounts and hate communities. Monitoring official parties or candidate accounts is significantly easier because the actors to monitor are relatively set. In some countries, they may be the most obvious perpetrators of Derogatory Speech and Hateful Content. Hate communities are more difficult to monitor because they are constantly changing and harder to identify. The presented method may be applied to both.
Step 1: Define your actors to monitor
For parties and candidates, define a list of official accounts to monitor. Consider focusing your monitoring of Derogatory Speech and Hateful Content on the candidates and parties most likely to be perpetrators, to free up time for other aspects of the monitoring. For hate communities, identifying hate actors may be more difficult. One solution is to track users engaging with the top perpetrators and analyse which hate communities they adhere to.
Step 2: Data collection
Again, posts are likely to be deleted by social media platforms, so it is important to take screenshots of images and save post data in real time. For parties and candidates, you will likely already be collecting social media posts from official candidate and party accounts; if so, you can monitor those same posts for Derogatory Speech and Hateful Content. For hate communities, you will have to engage in a dynamic process of adding more actors to your list as you find them.
Step 3: Data analysis
Label your collected data for Derogatory Speech and Hateful Content by integrating this into your categories and sub-categories. You may also want to label for specific narratives or for the target of hate. It is also worthwhile to track the total interactions and reach of specific posts to understand their impact. Consider referring to social media platforms' hate policies for examples to include in your Derogatory Speech and Hateful Content codebook. Because it is manual, this method is the most time-consuming, but it may be more precise than automated tools and allows you to label for specific nuances. If you or your team have programming capacity, it is possible to use code to identify Derogatory Speech and Hateful Content at scale. If that is the case, consider using the following tools with Python:
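Whichever libraries you choose, the core of a large-scale pass is a lexicon match over an export from your listening tool. Below is a minimal standard-library sketch; the column names and lexicon terms are hypothetical.

```python
import csv
import io

LEXICON = {"vermin", "traitor"}  # hypothetical derogatory terms

def flag_rows(csv_text: str) -> list[dict]:
    """Return exported rows whose 'content' field contains a lexicon term."""
    flagged = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        text = row.get("content", "").lower()
        if any(term in text for term in LEXICON):
            flagged.append(row)
    return flagged

# Illustrative two-row export
export = """content,interactions
A perfectly civil post,10
They are traitors,250
"""
for row in flag_rows(export):
    print(row["content"], "->", row["interactions"], "interactions")
```

In practice the export would come from the listening tool's CSV download, and flagged rows would then go to manual review.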
At the moment, no social listening tool can identify Derogatory Speech and Hateful Content and generate immediate data visualisations. Such tools can run preliminary sentiment analysis that indicates the tone of the conversation, but they are not fully accurate or reliable, as the analysis depends on the performance of the tool. The built-in sentiment analysis functions of ready-made tools do not accurately detect derogatory speech and hateful content. Such tools might flag potentially problematic posts, but these all require manual review. In addition, the machine learning behind this type of algorithmic sentiment attribution tends to perform better in English than in other languages, which may be a limitation in election missions.
This method is probably the easiest, as the target is known in advance and it does not require the full development of a Derogatory Speech and Hateful Content lexicon. However, it is the narrowest approach and does not paint a comprehensive picture of Derogatory Speech and Hateful Content in the general discourse. It may nonetheless reveal biases in the online discourse if vulnerable candidates are disproportionately targeted.
Step 1: Identify some potentially vulnerable candidates.
Consider any female and/or minority candidates who would be particularly vulnerable to Derogatory Speech and Hateful Content during an election.
Step 2: Data collection – collect posts about the candidate and/or comments if possible
Posts about the candidate can be collected via a keyword search for that candidate's name along with any relevant hashtags. If possible, collecting comments on that candidate's accounts is highly relevant as well. Some social listening tools, like SentiOne, can collect comments on Facebook, as well as replies and mentions on Twitter and comments on YouTube.
Step 3: Data analysis
A sample of posts and/or comments about a potentially vulnerable candidate can be labelled as hateful or not. More nuanced categories may be applicable as well, particularly the types of hate or the attributes that perpetrators focus on. See Democracy Reporting International's guide on monitoring gender-based harassment and bias online for coding category ideas.
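Once a sample has been labelled, a simple count per candidate already shows whether vulnerable candidates are disproportionately targeted. A minimal Python sketch with invented labels:

```python
from collections import Counter

# Hypothetical manually labelled comments: (candidate, label)
labelled = [
    ("Candidate A", "hateful"),
    ("Candidate A", "not hateful"),
    ("Candidate A", "hateful"),
    ("Candidate B", "not hateful"),
    ("Candidate B", "not hateful"),
]

totals = Counter(candidate for candidate, _ in labelled)
hateful = Counter(candidate for candidate, label in labelled if label == "hateful")

for candidate in sorted(totals):
    share = hateful[candidate] / totals[candidate]
    print(f"{candidate}: {share:.0%} of sampled comments labelled hateful")
```

Comparing these shares across candidates (for example, female versus male candidates) is one way to quantify a bias in the online discourse.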
Social Media Listening Tools are tools for monitoring, collecting and analysing data from social media platforms, including the content of social media publications as well as their metrics and metadata. Most social media listening tools are commercial products, providing paid access to data from the major social media platforms via an online dashboard. Others are free or open source and usually provide access to one platform or a set of alternative platforms. A third group is made available by the social media companies themselves; these are usually free (although with restricted access) but provide data only for the platforms operated by that company.
There are dozens of social media listening tools, each with its own characteristics. The following tables display the main features of some of them:



»» Full social media tools dynamic database at this link.
Most of the tools listed in these tables can support, in whole or in part, social media monitoring on election observation missions. In the last section of this Toolkit we describe how to implement the EOM and EEM frameworks using one of these tools, SentiOne. But those methodological frameworks can be implemented using most of these social listening tools, especially those able to access the most important social media platforms, as is the case of Brandwatch, Newswhip, Meltwater or Fanpage Karma, for example. All these tools have different user interfaces but operate in very similar ways.
Through the use of some of these tools on election observation missions all over the world, we have systematically collected feedback from analysts on their adequacy for that specific context. The following sections report the feedback regarding some of those tools.
SentiOne is a commercial social listening tool of Polish origin, capable of monitoring all main social media platforms, including Facebook, Instagram, X, TikTok and YouTube. Because it covers Facebook and Instagram, SentiOne can be considered an alternative to CrowdTangle. The tool was piloted in Jordan, Sri Lanka and Mozambique. The following table summarizes the analysts' reporting on SentiOne:
| PROS | CONS |
| In Sri Lanka, SentiOne managed to collect data from all the accounts included in the lists. That means the tool provided good coverage (tracking all the accounts to be monitored), at least in this instance. | The platform is unable to recognise Facebook accounts which are not fan pages or business profiles (as was the case with some actors in Mozambique). [note: this applies to most social listening tools] |
| While there is a cap of 10 projects, within one project users can monitor multiple social media channels such as Twitter, Instagram, Facebook and others. | SentiOne includes 10 projects (each project corresponds to a list of accounts or a search query), which in certain countries and for certain types of elections can be a limitation. The analyst has to manage the available projects to perform the monitoring. |
| Facebook reactions are presented in a disaggregated way, which allows analysts to perform analysis of positive and negative reactions (something not always available in other tools). | Lists of accounts cannot be uploaded in bulk, which means additional work and time for the team. Additionally, some accounts are difficult to find in SentiOne (e.g. Instagram accounts). Some Instagram links require manual assistance from the SentiOne team for crawling and adding to projects. Facebook pages without proper IDs need to be added manually using the page's ID, which requires finding it in the page source of the Facebook profile and is time-consuming. |
| The format of the output (CSV or Excel) is always the same regardless of the social media platform from which data is extracted, which makes it easy to compare different platforms. | Metrics (likes, reactions, comments, shares, etc.) can sometimes be lower in SentiOne than on the social platform itself. This was especially apparent on Facebook in Sri Lanka (on Instagram, for instance, the metrics are accurate). Missing posts: SentiOne may miss new or entirely fresh posts (Jordan). |
| Except for the impossibility of bulk upload and the difficulty of finding some accounts to monitor (on Instagram, for instance), overall SentiOne is easy and straightforward to use. The platform has an intuitive user interface that simplifies monitoring and analytics for all levels of users. | When monitoring the publications of a given account, SentiOne may include publications by other accounts mentioning that account, which pollutes the sample. There are ways to circumvent this problem, but the analyst has to be aware of the problem and of the solutions to it. |
| Good record of the number of publications and the number of interactions (Mozambique). | The platform does not accurately assess the sentiment (tone) of many of the users' comments, resulting in inaccurate charts. |
| Technical support is accessible and helpful. The customer support team responds quickly and helps resolve issues and address inquiries; it also helps with adding links that the user has difficulty adding. | Does not include data about Facebook Reels. |
| SentiOne has several features that make it a useful tool for Facebook monitoring and data analysis. | The time difference between Mozambique and Poland (SentiOne support) adds difficulty to the support process. |
| Strong AI analytics for Latin languages: the AI analytics tool performs exceptionally well with Latin-based languages, providing in-depth insights. Arabic language support: the platform is capable of reading and analyzing Arabic text, which enhances its utility in the Middle East. | SentiOne exhibits some weaknesses that affect the needs of the monitoring, namely inaccuracy in some areas. |
Gerulata is also a commercial social listening tool, originating from Slovakia. It provides support for the major social media platforms, including Facebook, Instagram, X, TikTok, YouTube and Telegram. It allows you to track content from a list of accounts or from a search keyword or query. The tool was piloted in the Sri Lanka EOM. The following table summarizes the analysts' reporting on Gerulata:
| PROS | CONS |
| List of accounts to monitor can be added in bulk, which facilitates the job of the analyst | Data from accounts monitored is not immediately available and may take up to 24 hours to show up on the tool. |
| In Sri Lanka, Gerulata managed to collect data from all the accounts included in the lists. That means the tool provided good coverage (tracking all the accounts to be monitored), at least in this instance. | Data available for download is less complete and comprehensive than in SentiOne. For example, reactions are not disaggregated, which allows some analyses but limits others. |
| Overall, Gerulata is easy to use. The fact that it allows the bulk upload of account lists and suggests other accounts to follow facilitates the work of the analyst. | There are discrepancies between the metrics displayed in Gerulata and those shown on the social media platforms themselves, although these are smaller than in SentiOne. In Sri Lanka, Gerulata achieved good accuracy on Facebook, but insufficient accuracy on Instagram and X. X data does not include views. |
| Able to provide access to a post even if it has been unpublished by the platform. | Depending on the contract, limits on the number of targets may be an issue (a target is a social media account, which means one candidate may require several targets). In Sri Lanka, targets had to be increased from 100 to 500 during the mission. |
| | Gerulata archives all content from targets on its own servers, which makes it slower to make data available, because it has to fetch the data first. On the other hand, this is what enables access to posts even after they have been unpublished by the platform. |
Newswhip is a content discovery and social media monitoring platform from the United States that helps users track and predict trends by analyzing social media engagement. It covers Facebook, Instagram (for which it announces its best coverage), YouTube, Pinterest, Reddit and TikTok. The tool was piloted in the Jordan EOM. The following table summarizes the analysts' reporting on NewsWhip:
| PROS | CONS |
| | The tool does not support reading Arabic, limiting its usefulness in regions where Arabic is prominent. |
| | The training provided in this instance was insufficient for a thorough understanding of the platform's capabilities. |
| | The customer service team was not responsive or helpful, resulting in a lack of support for the users. [note: the team was using a limited version of the tool, without regular support or training] |
Sotrender is a social media listening tool from Poland. Its basic tiers provide access to Facebook, Instagram, LinkedIn and YouTube, while its most expensive tier also includes support for Telegram, TikTok and X. The tool was piloted in the Sri Lanka EOM, alongside SentiOne and Gerulata. The following table summarizes the analysts' reporting on Sotrender:
| PROS | CONS |
| Extensive coverage (tracking all the accounts to be monitored), especially for Facebook and Instagram. | Focused on monitoring users' own accounts; not suitable for an election observation mission. |
| Easy to use. | Each account has to be uploaded individually and needs to be tracked by the system, which is time-consuming. |
| | As it is focused on own accounts, Sotrender reports include only one account at a time, thus not allowing the construction of monitoring lists. |
| | Provides only aggregated metrics, which is insufficient for a full analysis. |
Cyclops, a tool developed in Slovenia by Cronos Europa for tracking disinformation, was also initially considered for piloting in one of the missions, but first contact with the tool revealed some limitations for this use, namely: a) it does not include metrics; b) it does not allow CSV downloads; and c) it does not provide historical data. For those reasons, Cyclops ended up not being piloted.
The piloting of different tools to support social media monitoring during election missions will continue, and the resulting feedback will continue to enrich this Toolkit, in response to the ever-changing landscape of social media monitoring tools.
The goal of the methodological frameworks for election observation missions is to ensure consistency, objectivity and transparency in the monitoring, collection and analysis of data. The expected final outcome is that the qualitative analysis of the areas of assessment is supported by quantitative data. The role of data visualization tools is to present that data clearly and concisely, so that the reader can understand it and the analyst can visualize further connections between different data points.
There is a vast offer of visualization tools on the market: free or paid, simpler or more complex, self-contained or integrated with databases. The following table displays a non-exhaustive list of those tools.

»» Full social media tools dynamic database at this link.
Most data visualization tools can integrate with external accounts, like Google Drive or Microsoft OneDrive, in order to collect the data to visualize from the Google Sheets or Microsoft Excel files where it is housed. But most of these tools also allow the analyst to enter or paste the data directly into the tool. On the other hand, some of these tools offer the possibility of designing report templates including several tables and charts, while others only permit one chart or table at a time, which may be the right solution if we want to integrate the data visualizations into a Word or PDF document.
The quantity of templates, the graphic options and the user interface are the main points that distinguish these different tools. That is why, when drafting a report, it is advisable to design all charts and tables using the same tool, to ensure consistency and maintain the look and feel of the tables and charts.
Some social listening tools, like SentiOne or Brandwatch, for instance, already integrate sections for data visualization that work directly with the collected data. Those sections may be useful for the social media analyst to explore the data, but, for consistency, it is still recommended to use one single data visualization tool.
For EODS election observation missions, we recommend the use of Datawrapper for data visualization. Datawrapper is owned and developed by a European company, originating from Germany; its free tier is adequate for the data visualization needs of the missions, and the tool is powerful and easy to operate. In addition, it allows the creation of teams that can share data, charts and tables.
The way to use Datawrapper is to copy and paste the data directly into the tool, or to connect it to the Excel or Google Sheets files in which the data was collected and manipulated. The following screenshots illustrate how to use the tool. Further information can be found in the Academy section.
When you want to create a new visualization, you can choose between a chart, a map or a table. Each of these options opens the templates available for it. You can also create and use a group of templates specifically selected and/or created for your team. Additionally, you may explore the River section, where charts and tables that others have used or created are shown and can be reused. Once you find the template that suits the data you want to visualize, you can choose that template by creating a copy.


The first step in creating your chart, table or map is to upload the data. That can be done simply by copying and pasting into the designated area, or by connecting to an Excel file on your desktop or to a Google Sheet (or another format) on an external drive. Make sure the data you are uploading is only that which you want on the chart: if you have an Excel file or Google Sheet with many columns and rows, first prepare a file or sheet with just the columns and rows you want on the chart or table. In the Check & Describe section you can verify that the uploaded data is correct and change the identification of the data, if needed.

Once you have uploaded the data, you can choose the type of chart (or table) you want to use and visualize the results. Then, in the Refine, Annotate and Layout tabs, you can finish your chart or table. In the Refine tab you can change several configurations of the chart, and in the Annotate tab you can give the chart a title. Remember to make the title short but immediately clear for the reader. If you want, you can use the "Description" field and/or the "Notes" field to make the content of the chart clearer. You can then proceed to Publish & Embed to export it as a PNG to insert in your report, or to embed or link it in a webpage.

The visualization of the data is the last step in the analysis, so turn to it after you have explored and manipulated (in Excel or Google Sheets) the data you have collected, for instance by selecting specific columns or rows, or calculating sums, averages or regressions. The refined data resulting from that process is what you should input into Datawrapper, as the final stage of the process. However, a data visualization tool such as Datawrapper can also be useful in exploring the data, letting you visualize it in different ways and compare different datasets. You can do that simply by using the copy and paste feature.
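As an illustration of that "refine first, visualize last" workflow, the aggregation step can be scripted so that the output is a small table ready to paste into Datawrapper's data step. The party names and figures below are invented.

```python
from collections import defaultdict

# Hypothetical rows exported from a listening tool
raw = [
    {"party": "Party A", "interactions": 120},
    {"party": "Party B", "interactions": 80},
    {"party": "Party A", "interactions": 60},
]

# Sum interactions per party
totals = defaultdict(int)
for row in raw:
    totals[row["party"]] += row["interactions"]

# Print as comma-separated lines, ready to paste into Datawrapper
print("Party,Total interactions")
for party, total in sorted(totals.items()):
    print(f"{party},{total}")
```

The same refinement can of course be done directly in Excel or Google Sheets; scripting it is useful when the export is large or the aggregation has to be repeated during the mission.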
Social Media Monitoring Annex Template for Final Report
Terms of Reference for National Staff Social Media Monitors
Attacks against Election Observation Missions (EOMs)
Check List
Media Analyst Annex
Guidance Note
Briefing Note
Guidelines for observing election content
Observe and Assess Online Campaigns
Guidelines