GeoPoll | https://www.geopoll.com/ | High quality research from emerging markets

Report: Global South Perceptions of the Iran–Israel–U.S. Conflict
https://www.geopoll.com/blog/iran-israel-us-conflict-report/ | Mon, 23 Mar 2026

The post Report: Global South Perceptions of the Iran–Israel–U.S. Conflict appeared first on GeoPoll.

Report: What Citizens Across the Global South Really Think About the Iran–Israel–U.S. Conflict

Deep economic anxiety, stark regional divides on blame and sympathy, declining trust in Western powers and media, and an overwhelming demand for peace.

The Middle East conflict has consumed headlines, but one voice has been largely missing from the conversation: that of the billions of people across the Global South who are bearing the economic and social fallout.

In March 2026, GeoPoll surveyed citizens across Pakistan, Saudi Arabia, Egypt, Kenya, Nigeria, and South Africa to understand how they perceive, experience, and respond to the escalating Iran–Israel–U.S. conflict. The findings are striking and carry direct implications for governments, international organisations, and media institutions.

  • 100% are aware of the conflict
  • 70% believe the use of nuclear weapons is likely
  • 72% believe this might be the start of World War 3
  • 38% blame Israel for the war; 29% the U.S.; 18% Iran
  • 43% view the U.S. less favourably due to the war
  • 25% say Western media is misleading
  • 70% say fuel prices have been significantly affected
  • 69% are "very concerned" about the cost of living
  • 54% want their government to call for peace

The free 37-page report includes detailed country-level breakdowns, cross-tabulations, open-ended citizen responses in three languages, comparisons with on-the-ground realities, and actionable policy recommendations. It is essential reading whether you are a policymaker, diplomat, development practitioner, journalist, researcher, or simply curious about how this conflict is reverberating far beyond the Middle East.

Fill in this form to download the full report (free):

Annual Report: The Gender Equality Report 2026
https://www.geopoll.com/blog/the-gender-equality-report-2026/ | Thu, 12 Mar 2026

Imagine two colleagues at the same company. One believes their workplace is fair and equitable. The other, sitting at the desk next to them, experiences something very different every day.

That gap, the space between what we believe and what people actually live, is at the heart of what GeoPoll set out to measure in early March 2026.

Our Africa Gender Survey 2026 is now available. It surveyed 2,420 respondents across Kenya, Nigeria, South Africa, and Egypt, and what came back was a picture that is at once hopeful and sobering.


The Perception Gap Is Real, and It Lives in the Workplace

Ask people at a societal level whether men and women have equal opportunities, and 65% will say yes. Ask those same people specifically about their own workplace, and that figure drops to 52%.

Thirteen percentage points. That is the distance between what we believe in the abstract and what people experience on the ground, and it is arguably the most important finding in this report.

It gets sharper when you break it down by gender. Among employed respondents, 57% of men say their workplace treats everyone equally. Only 46% of women say the same. The most commonly observed expression of this inequality? Nearly half of respondents noted that women are concentrated in junior roles while men dominate senior leadership.

These are not perceptions of a faraway problem. They are descriptions of where people spend most of their waking hours.


Nearly Half of Respondents Have Experienced Sexual Harassment

47% of respondents across the four markets reported having personally experienced some form of sexual harassment.

That number on its own should give us pause. But the gender breakdown is where it becomes impossible to look away: nearly 6 in 10 women, 59%, reported personal experience of sexual harassment, compared to 35% of men. And the most frequently cited location was not a dark alley or a public space. It was the workplace, at 51%.

The survey also found that harassment in educational settings was reported by 32% of respondents, a figure that carries serious long-term consequences for girls’ participation and outcomes in learning.

What makes this harder to address is not awareness. It is action. Research on African media organisations, cited in the report, found that while one in two women experienced workplace sexual harassment, only 30% of cases were ever formally reported. Fear of retaliation and a lack of confidence that organisations would respond meaningfully were the primary reasons.


81% of People Believe Women Should Inherit Equally. So Why Doesn’t It Happen?

The answer, according to 62% of respondents, is cultural and traditional beliefs.

This is one of the most striking tensions in the entire report. There is near-universal normative agreement that women should have equal property inheritance rights, with 81% affirming it across the full sample. Yet the primary barrier is not law. It is not policy. It is deeply ingrained cultural practice that continues to override formal rights in communities across Kenya, Nigeria, South Africa, and Egypt.

Religious interpretations were cited by 10% of respondents, while 12% pointed to the perception that women are not permanent family members, a view that leaves widows particularly vulnerable. The gap between what people say they believe and what they are willing to enforce in their own families and communities is one of the hardest problems in gender equality work, and this survey makes it visible.


75% Would Vote for a Female President. But None of These Four Countries Has Had One.

That contrast says a great deal.

Across all four markets, 75% of respondents said they were likely or very likely to vote for a female presidential candidate. Only 15% said they were unlikely to do so. Public attitudes, it seems, have moved ahead of political structures.

None of the four countries surveyed, Kenya, Nigeria, South Africa, or Egypt, has had a female president or head of government. Yet voters say they are ready. That points clearly to structural and institutional barriers as the real obstacle, not public resistance.

The data also shows that support for improving access to justice is the top priority action respondents want from governments, selected by 79% of the sample. Expanding education and training came second at 66%, and increasing women’s leadership third at 58%.


Download the Full Report

The findings above are a starting point. The full GeoPoll Africa Gender Survey 2026 covers:

  • Country-level breakdowns for Kenya, Nigeria, South Africa, and Egypt
  • Gender-disaggregated data across every key topic
  • Awareness, exposure, and witness rates across nine types of gender-based violence
  • Household financial decision-making and economic empowerment
  • AI usage patterns and attitudes by gender
  • Conclusions and actionable recommendations for governments, organisations, and advocates

The data exists to inform decisions. We hope it does.



Contact Us

For more information about this project, get clarifications on any section in the data, or to learn more about our capabilities, please feel free to contact GeoPoll.

AI in Research: Design and Problem Definition
https://www.geopoll.com/blog/ai-research-design/ | Tue, 17 Feb 2026

Part 2 of our series on integrating artificial intelligence into the research process


The email lands on a Monday morning. A client, let’s say a development organization working across Africa, needs to understand how communities are adapting to climate shocks. They have funding, a timeline, and a genuine need for answers. What they often lack is a fully developed research design.

“We trust you to figure out the best approach,” they write. “You are the experts.”

This is how most research projects begin. Not with a polished methodology section, but with a problem that needs solving and a partner trusted to translate that problem into rigorous inquiry. The space between “we need to understand X” and a fieldwork-ready research design is where some of the most consequential decisions get made.

It is also where AI is proving unexpectedly useful.

The Messy Reality of Research Design

Research design isn’t linear. It is iterative, collaborative, and often constrained by factors that have nothing to do with methodological purity, such as budget limits, timeline pressures, data availability, political sensitivities, and client expectations.

The process typically involves:

  • Clarifying what the client actually needs to know (which isn’t always what they initially ask for)
  • Understanding what’s already known about the topic
  • Identifying the right questions to answer the underlying need
  • Determining what methodology will yield credible answers given real-world constraints
  • Anticipating what could go wrong and designing around it

Experienced researchers carry much of this in their heads – pattern-matched from dozens of similar projects. But that expertise is hard to scale, and even veterans have blind spots.

This is where AI enters the picture. Not as a replacement for research expertise, but as a thinking partner that can hasten and strengthen each stage of the design process.

From Vague Brief to Sharp Research Questions

Let’s return to our climate adaptation project. The client’s initial brief is broad: “understand how communities are adapting to climate shocks.” That’s a starting point, not a research question.

The first task is understanding what they actually need. Are they interested in documenting existing adaptation strategies? Measuring their effectiveness? Understanding barriers to adoption? Identifying which populations are most vulnerable? All of these could fall under “climate adaptation,” but each implies a different study.

AI can help here by:

Generating structured questions that surface unstated assumptions. Feed the brief into a well-prompted model, and it will return a list of clarifying questions the research team should ask: What types of climate shocks? What timeframe? Which communities? What decisions will this research inform?

Mapping the problem space. AI can quickly generate a conceptual map of related variables, potential frameworks, and dimensions worth considering. This isn’t definitive. It’s a starting point for discussion that ensures nothing obvious gets overlooked.

Suggesting alternative framings. Sometimes, the most valuable thing a research partner can do is reframe the question. A model trained on diverse research, such as GeoPoll’s specifically tuned AI Engine, can propose angles the client hadn’t considered, shifting the focus from “how are communities adapting?” to “what predicts successful adaptation?” or “where are adaptation efforts failing, and why?”

None of this replaces the conversation with the client. But it compresses what might take several rounds of back-and-forth into a more focused initial discussion.

What’s Already Known: AI-Assisted Literature Review

Good research design requires understanding the existing landscape. What have others found? What methodologies have worked? Where are the gaps?

Traditional literature review is time-intensive. Researchers spend hours searching databases, scanning abstracts, reading papers, and synthesizing findings. For a well-funded academic study, this investment is appropriate. For a rapid-turnaround applied project with a six-week timeline, it’s often impractical.

AI doesn’t replace rigorous literature review, but it dramatically accelerates preliminary synthesis:

Rapid landscape mapping. Within minutes, AI can summarize what’s broadly known about a topic, identify key debates, and flag seminal studies worth reading in full. This gets the research team to baseline understanding faster.

Identifying methodological precedents. “How have others studied climate adaptation in Africa?” is a question AI can answer with reasonable accuracy, pointing toward approaches that have worked and those that have faced criticism.

Surfacing gaps. AI can synthesize what exists and help identify what doesn’t: unanswered questions, understudied populations, and untried methodologies. These gaps often become the most valuable research opportunities.

Cross-disciplinary connections. AI doesn’t respect academic silos. It might surface relevant work from behavioral economics, anthropology, or public health that a researcher siloed in their own discipline might miss.

The important caveat is that AI-generated literature summaries require verification. Models can hallucinate citations, mischaracterize findings, or miss recent work. The output is a starting point for human review, not a finished product.

Designing for Constraints

Every research project operates within constraints. Budget caps what’s possible. Timelines limit depth. Access determines who can be reached. Political sensitivities shape what can be asked.

Experienced researchers chart these tradeoffs intuitively. AI can make that navigation more systematic:

Scenario modeling. Given a fixed budget, what sample sizes are achievable across different methodological approaches? A trained AI model can quickly model tradeoffs – a larger sample with phone surveys versus a smaller sample with in-person interviews – helping teams make informed decisions.
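To make this kind of tradeoff modeling concrete, the sketch below compares achievable sample sizes and resulting margins of error across survey modes under a fixed budget. The modes and per-interview costs are hypothetical placeholders for illustration, not GeoPoll rates.

```python
import math

# Hypothetical cost-per-completed-interview figures (USD), for illustration only
COST_PER_COMPLETE = {
    "CATI (phone)": 8.0,
    "SMS": 3.5,
    "In-person (CAPI)": 25.0,
}

def achievable_sample(budget: float, cost_per_complete: float) -> int:
    """Number of completed interviews a fixed budget can buy."""
    return int(budget // cost_per_complete)

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """95% margin of error for a proportion, assuming simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

budget = 20_000.0
for mode, cost in COST_PER_COMPLETE.items():
    n = achievable_sample(budget, cost)
    moe = margin_of_error(n)
    print(f"{mode}: n={n}, margin of error = +/-{moe * 100:.1f} pts")
```

With these assumed costs, a USD 20,000 budget buys 2,500 phone interviews (roughly +/-2 points) versus 800 in-person interviews (roughly +/-3.5 points) – exactly the kind of side-by-side comparison that informs a mode decision.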

Risk identification. What could go wrong? AI can generate a preliminary risk register based on the project parameters: potential for low response rates in certain regions, sensitivity of particular questions, logistical challenges in specific geographies. This isn’t exhaustive, but it prompts the team to think through contingencies.

Methodology matching. Given the research questions, constraints, and context, what methodological approaches make most sense? AI can suggest options the team might not have considered and flag potential limitations of each.

Pressure-Testing Assumptions

Every research design rests on assumptions, about respondent behavior, about data quality, about what questions will actually measure what you intend them to measure.

AI is useful for stress-testing these assumptions before fieldwork begins:

Anticipating respondent interpretation. How might a question be understood differently across contexts? AI can simulate diverse respondent perspectives, flagging potential misinterpretation before you’re in the field. This is one of a few areas where GeoPoll uses synthetic data.

Identifying confounding variables. What factors might influence the outcomes you’re measuring that aren’t captured in your design? AI can generate lists of potential confounds worth considering.

Checking logical consistency. Does the research design actually answer the research questions? It’s surprisingly easy for these to drift apart. AI can serve as a check, mapping questions to design elements and flagging gaps.

What AI Can’t Do in Research Design

It would be easy to overstate AI’s role here, so let’s be clear about the limits.

AI can’t define what matters. The strategic decisions, such as what questions are worth answering, what tradeoffs are acceptable, and what the research should ultimately accomplish, remain human judgments. AI can inform these decisions; it can’t make them.

AI doesn’t understand context the way practitioners do. A model doesn’t necessarily know that a particular region has experienced recent political upheaval that will affect response patterns, or that a certain phrasing carries unintended connotations in local dialect. Contextual knowledge is irreplaceable.

AI can’t navigate relationships. Research design is often negotiated with clients, partners, communities, and institutions. The interpersonal work of aligning stakeholders, building trust, and managing expectations is entirely human.

AI outputs require judgment. Everything AI produces in the design phase needs evaluation by experienced researchers. The model doesn’t know when it’s wrong. Humans have to.

How to Integrate AI into Research Design

The most effective use of AI in research design follows a consistent pattern:

  1. Human defines the problem and constraints. The client’s need, the project parameters, and the contextual factors come from people.
  2. AI powers exploration. Literature synthesis, question generation, methodology options, risk identification: AI compresses work that would otherwise take days into hours.
  3. Human evaluates and decides. Every AI output gets filtered through research expertise. What’s useful gets kept; what’s off-base gets discarded.
  4. The cycle repeats. Design is iterative. AI can be brought back in at each stage to pressure-test, expand options, or check consistency.

This is not AI replacing researchers at the design stage; design is one of the areas where human expertise is most critical, because it can make or break the research. It is AI amplifying what good researchers already do – asking better questions, considering more angles, anticipating more problems – at a pace that matches real-world project timelines.

Questionnaire Development

Research design ultimately culminates in the instruments you will use to collect data: the questionnaire, discussion guide, or observation protocol. AI has significant applications here as well, from drafting and iteration to translation and cognitive testing.

We’ll cover questionnaire development in depth later in this series. For now, the key point is that stronger upstream design – clearer questions, better understanding of context, more thoroughly considered methodology – makes instrument development faster and more effective.

Looking Ahead

Return to the climate adaptation project we started with: with AI assistance, the research team can move from a vague brief to a detailed design proposal in a fraction of the time it once required. The proposal is sharper because more options were considered. The methodology is stronger because more risks were anticipated. The questions are better because more assumptions were tested.

None of this guarantees good research. That still depends on execution, judgment, and the irreplaceable expertise of people who understand what they’re studying. But the foundation is stronger.


Working on a research design challenge? We’d welcome the conversation. Contact GeoPoll to discuss how we approach complex projects across diverse contexts.

AI in Research Series: Where we are and where it actually works (or not)
https://www.geopoll.com/blog/ai-in-research/ | Tue, 03 Feb 2026

The first in a series on integrating artificial intelligence into the research process.

AI has become one of those words that’s everywhere: a buzzword in boardrooms, a curiosity in most conversations, professional or social, and increasingly, a quiet presence in how work actually gets done. According to Google’s Our Life with AI Report, 48% of people globally now use AI at work at least a few times a year, with writing and editing tools among the most common applications. Among content professionals, the numbers are even higher: over 70% use AI for outlining and ideation, and more than half use it to draft content.

The adoption curve is real. But so is the uncertainty. In Stack Overflow’s 2025 developer survey, 84% of respondents use or plan to use AI tools, yet 46% say they don’t trust the accuracy of the output. People are using AI. They’re just not sure how much to believe it.

For researchers, this tension is especially acute. Our work demands rigor. It requires accuracy, nuance, and accountability, qualities that don’t pair naturally with tools known for confident-sounding hallucinations. And yet the potential is hard to ignore: faster questionnaire development, smarter quality assurance, analysis at scales that weren’t previously practical.

So where does that leave us? For all the attention AI adoption receives, much of the conversation remains polarized. On one end is hype: claims that AI will “replace research as we know it.” On the other is skepticism: a belief that AI is fundamentally incompatible with rigorous, ethical, human-centered inquiry.

The reality sits somewhere in between.

As our CEO, Nicholas Becker, wrote in this article, AI is not changing why research is conducted. It is changing how it is conducted, and in doing so, it is forcing the research community to revisit long-held assumptions about quality, speed, scale, and responsibility.

This post and the series that follows aim to fill that gap. We will share what we have learned about where AI genuinely adds value in research, where it falls short, and how to think about integration in ways that strengthen rather than complicate your work.

The Current Landscape

AI adoption in research is uneven, and for understandable reasons.

Some organizations, such as GeoPoll, are experimenting aggressively and automating significant portions of their analysis workflows. Others are watching and waiting, uncertain whether the tools are mature enough to trust with work that demands rigor.

Both positions are reasonable. The gap between what AI can do in controlled demonstrations and what it reliably does under field conditions is real. A tool that performs impressively on clean, English-language data may struggle with the realities of multilingual surveys, low-connectivity environments, or the cultural nuance required to interpret responses from communities the model has never encountered.

This is particularly true for research in emerging markets and complex settings, exactly the contexts where good data is most needed and hardest to collect. The assumptions baked into many AI tools often reflect their training environments: high-resource languages, stable infrastructure, Western cultural frameworks. When those assumptions don’t hold, performance degrades in ways that aren’t always obvious.

None of this means AI isn’t useful. It means we need to be specific about where it works, honest about where it doesn’t, and thoughtful about how we integrate it.

Where AI Genuinely Adds Value

Let’s start with what’s working. These are applications where the technology is mature enough to deliver consistent value, and where we have seen real improvements in efficiency, quality, or both.

1. Research Design and Problem Definition

Early-stage research design has always been one of the most human-dependent phases of the process. Defining the right question, aligning objectives, and translating abstract goals into measurable constructs requires judgment, domain knowledge, and contextual awareness.

AI can support this stage by synthesizing large volumes of background material, identifying recurring themes across prior studies and stress-testing logic, assumptions and consistency in objectives.

This is one of the very few places where GeoPoll uses synthetic data – to simulate real-world possibilities and tighten the research design.

However, AI cannot determine what matters. It can help refine how a question is phrased, but it cannot decide whether the question is meaningful, relevant, or appropriate for a given context. That responsibility remains firmly human.

2. Questionnaire Development and Translation

Building on the research design stage above, AI has also become a genuine accelerator in the early stages of instrument design. It can generate initial question drafts, identify ambiguous phrasing, suggest alternative wording, and flag potential sources of bias. These capabilities are particularly useful for cognitive pretesting, helping you anticipate how respondents might misinterpret questions before you’re in the field.

Translation and back-translation workflows have also improved significantly. While human review remains essential, AI can produce working drafts faster and more consistently than traditional approaches, freeing skilled translators to focus on nuance rather than first passes.

This has been particularly useful to us, as we conduct several multicountry and multilingual surveys. Using thousands of our past translated questionnaires, we have trained our own models to produce near-final translations, so our translation teams only need to review and refine them – making the work far easier and more efficient.

3. Quality Assurance and Data Cleaning

Quality control is where AI’s pattern recognition capabilities shine. Real-time monitoring during data collection can flag anomalies: interviews completed suspiciously fast, response patterns that suggest straightlining or satisficing, geographic inconsistencies, or interviewer behaviors that warrant review.

The value here isn’t replacing human judgment but directing it more efficiently. Instead of reviewing random samples, quality teams can focus attention where it’s most needed. Fraud detection, in particular, has become significantly more sophisticated with machine learning approaches that identify coordinated fabrication patterns humans might miss.
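As a minimal sketch of the rule-based layer of such monitoring, the code below flags two of the signals named above: suspiciously fast completions and straightlined rating batteries. The field names, thresholds, and records are hypothetical, chosen only to illustrate the idea.

```python
from statistics import median

def flag_interview(record, median_duration_s):
    """Return a list of QC flags for one completed interview.

    `record` is assumed to carry `duration_s` (completion time in seconds)
    and `ratings` (answers to a Likert-scale battery); both names and the
    thresholds below are illustrative, not production values.
    """
    flags = []

    # Suspiciously fast: under 40% of the median completion time
    if record["duration_s"] < 0.4 * median_duration_s:
        flags.append("too_fast")

    # Straightlining: identical answers across a rating battery
    ratings = record["ratings"]
    if len(ratings) >= 5 and len(set(ratings)) == 1:
        flags.append("straightlining")

    return flags

interviews = [
    {"id": 1, "duration_s": 900, "ratings": [4, 2, 5, 3, 4]},
    {"id": 2, "duration_s": 200, "ratings": [3, 3, 3, 3, 3]},  # fast + straightlined
    {"id": 3, "duration_s": 850, "ratings": [1, 4, 2, 5, 3]},
]
med = median(r["duration_s"] for r in interviews)
for r in interviews:
    print(r["id"], flag_interview(r, med))
```

Machine-learning fraud detection adds pattern recognition across interviews and interviewers on top of simple per-record rules like these; the point of either layer is to route human review, not replace it.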

4. Analysis and Insight Generation

Anyone who has manually coded thousands of open-ended responses understands the appeal of automation. Natural language processing, with well-trained models such as the one GeoPoll Senselytic uses, can now handle initial coding, theme extraction, and sentiment analysis at scale – work that previously consumed enormous amounts of time and introduced its own inconsistencies.

The keyword is “initial.” AI-generated codes require human review, and the categories need refinement based on contextual understanding the model might lack. But as a first pass that analysts then validate and adjust, the efficiency gains are substantial. Also, analysis is not insight. AI can surface patterns, but it may not fully understand causality, significance, or implication in the way decision-makers require. Without human interpretation, there is a real risk of over-fitting narratives to statistically convenient patterns.

The validated results can then be fed back into the model, continuously improving its capabilities for the next round.
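To make "initial coding" concrete, here is a deliberately simple keyword-based first pass. The codebook and responses are invented for illustration; trained NLP models of the kind described above go well beyond keyword matching, but the workflow is the same: machine draft first, analyst validation second.

```python
# Hypothetical codebook: theme -> trigger keywords (illustration only)
CODEBOOK = {
    "price": ["expensive", "cost", "price", "afford"],
    "access": ["far", "distance", "unavailable", "stockout"],
    "quality": ["broken", "poor quality", "fake"],
}

def initial_codes(response: str) -> list[str]:
    """First-pass theme assignment for one open-ended response.

    The output is a draft for human review, not a final coding.
    """
    text = response.lower()
    return [theme for theme, keywords in CODEBOOK.items()
            if any(kw in text for kw in keywords)]

responses = [
    "Fertilizer is too expensive and the shop is far from our village",
    "The seeds we bought were fake",
]
for r in responses:
    print(initial_codes(r))
```

An analyst would then review the drafted codes, merge or split themes, and recode the responses the first pass missed, which is where the contextual judgment lives.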

5. Reporting, Visualization, and Storytelling

Beyond analysis, AI streamlines the communication of findings: drafting report sections, generating visualization options, summarizing results for different audiences, and adapting technical findings into plain narratives.

For organizations producing high volumes of research, this represents significant time savings. First drafts that once took days can be generated in hours, freeing researchers to focus on refinement, interpretation, and strategic recommendations.

6. Operational Efficiency

Beyond the research process itself, AI streamlines the operational work that surrounds it: drafting reports, cleaning and restructuring data, generating documentation, and summarizing findings for different audiences. These applications are less glamorous but often deliver the most immediate time savings.

But Human Judgment Remains Essential

Listing AI’s capabilities without acknowledging its limitations would be both incomplete and misleading. There are aspects of research where human judgment isn’t just preferable, it’s irreplaceable.

1. The Foundation

Research does not begin at the design stage. It starts with a real problem an organization needs to solve. AI can help refine questions, but it can’t tell you which questions matter. The strategic decisions that shape a study – what to measure, why it matters, how findings will be used – require understanding of context, stakeholders, and objectives that models don’t possess. This is where research value is created or lost, and it remains fundamentally human work.

2. Contextual Interpretation

Data doesn’t interpret itself. Understanding what a response pattern means requires knowledge of local context – political dynamics, cultural norms, recent events, historical relationships – that AI tools lack. A model might identify that responses in a particular region differ from the national average; understanding why they differ, and what that implies for the research question, requires human insight.

This is especially critical in cross-cultural research, where the same words can carry different meanings, and where what’s left unsaid is often as important as what’s captured in the data.

3. Ethical Judgment

Research involves ongoing ethical decisions: how to handle sensitive disclosures, when informed consent requires additional explanation, how to protect vulnerable respondents, whether certain questions should be asked at all in particular contexts. These judgments require moral reasoning, empathy, and accountability that can’t be delegated to algorithms.

4. Stakeholder Relationships

Research happens within relationships – with communities, partners, clients, and institutions. Building trust, navigating sensitive topics, communicating findings in ways that lead to action rather than defensiveness: these are human skills that no AI will replicate. The credibility of research ultimately rests on the people behind it.

5. Final Analytical Decisions

AI can surface patterns and generate hypotheses, but the final interpretive decisions – what the data means, how confident we should be, what recommendations follow – belong to researchers. The stakes of getting this wrong are too high, and the accountability too important, to outsource.

The Integration Question

Based on all this, the question isn’t whether to use AI but how to integrate it without breaking what already works.

The most sustainable approach treats AI as an augmentation rather than a replacement. The goal isn’t to automate researchers out of the process but to free them from tasks where their judgment adds less value, so they can focus where it adds more. AI handles the volume while humans handle the judgment.

This requires what’s often called “human-in-the-loop” workflows: processes designed so that AI outputs are reviewed, validated, and refined by people before they influence decisions. It’s slower than full automation, but it’s also more reliable and more accountable.

It also requires building internal capacity. Organizations that outsource AI entirely to vendors risk losing understanding of how their research is actually being conducted. The teams that will use AI most effectively are those that understand it well enough to know when it’s helping and when it’s not.

In our work at GeoPoll, we see AI as a tool that strengthens research when it is embedded thoughtfully, not when it is layered on top as a shortcut. The most effective applications combine automation with clear methodological guardrails and continuous human oversight.

What This Series Will Cover

This article sets the foundation for a deeper exploration of AI across the research lifecycle. In the coming pieces, we will go into each stage in detail, looking closely at what works, what doesn’t, and what responsible use looks like in practice:

  • Research design and questionnaire development: From hypothesis to instrument
  • Sampling and recruitment: Reaching the right respondents
  • Data collection: Fieldwork in the age of AI
  • Quality assurance: Detection, monitoring, and validation
  • Analysis and interpretation: From data to insight
  • Reporting and visualization: Communicating findings effectively
  • Ethics and limitations: What AI can’t do, and why it matters

Each post will be practical and specific, drawing on real-world applications and our experience rather than theoretical possibilities.

GeoPoll’s Perspective

At GeoPoll, we have spent over a decade conducting research in some of the world’s most challenging environments—conflict zones, low-connectivity regions, rapidly evolving political contexts. We complete millions of interviews annually across more than 100 countries, in dozens of languages, using mobile-first methodologies designed for conditions where traditional approaches don’t work.

That experience has shaped how we think about and work with AI. We have seen what works when assumptions break down, when infrastructure isn’t reliable, and when the cultural context is unfamiliar to the models. We have learned through iteration, testing tools in the field, finding their limits, and building workflows that account for them. As a technology research company, we have built AI platforms and processes into our research and are actively employing AI to make our work easier and deliver greater value to our clients and partners.

This is the knowledge we are sharing in this series.

If you are thinking about how AI might strengthen your research, we would welcome the conversation. Contact us to discuss what’s working, what’s not, and where the opportunities might be.

The post AI in Research Series: Where we are and where it actually works (or not) appeared first on GeoPoll.

]]>
The Online Sampling Crisis: Why Bad Data is Rising and how to Stop it https://www.geopoll.com/blog/online-sampling-risks/ Mon, 01 Dec 2025 08:11:47 +0000 https://www.geopoll.com/?p=25413 Over the last few decades, online sampling and online panels have become a cornerstone of modern research – fast, scalable, and cost-efficient. […]

The post The Online Sampling Crisis: Why Bad Data is Rising and how to Stop it appeared first on GeoPoll.

]]>
Over the last few decades, online sampling and online panels have become a cornerstone of modern research – fast, scalable, and cost-efficient. But in recent years, the industry has been grappling with a serious, structural threat that has gone up sharply in the last few months. A growing share of online survey responses is unreliable, artificially generated, or outright fraudulent.

Research clients are feeling it. Several have reached out to us at GeoPoll recently to say that other panel providers delivered datasets full of questionable responses. In one case, we audited such a dataset and found respondents claiming to work for companies that, on cross-checking, did not exist. That is not a minor quality issue; it is a failure of the most basic layer of respondent verification.

The problem is not isolated. It is becoming pervasive, and it threatens the trustworthiness of survey research if left unchecked.

In this article, we break down what is happening, why it is happening, and, most importantly, what the industry must do about it.

Why online sampling is under pressure

The challenges the industry is experiencing stem from several converging pressures:

  • The explosion of bots and automated respondents – Fraudulent actors can now generate large volumes of convincing survey completions using tools that simulate human behaviour, including normalised click paths, varied timing, and even device switching. The barrier to entry is low, the incentives are high, and the fraudsters are increasingly sophisticated.
  • AI-generated open-ended responses – Generative AI has introduced a new challenge for the industry: artificial open-ended responses that sound perfectly human but contain no personal context. This is especially dangerous because open-ended questions were once reliable indicators of quality. Today, AI models can produce responses that are linguistically rich yet completely inauthentic, which makes manual review far more difficult.
  • Panel fatigue and low engagement – A third pressure point is panel fatigue. In many markets, respondents are oversurveyed and under-engaged. As genuine participation declines, some panel providers fill quotas through loosely vetted traffic sources, unverified accounts, or third-party suppliers whose quality mechanisms are opaque. This is often where “junk” data enters the chain: responses that look complete but crumble under scrutiny.
  • Nonexistent profiles and artificial identities – Beyond fake companies, we are now seeing invented educational histories, geographic misrepresentation through VPNs, and household profiles that defy demographic reality. Incentive-driven fraud compounds this by enabling entire online communities to trade survey links, completion codes, and tips for bypassing checks.

The result is a landscape where bad data can be generated at scale, faster than many traditional panels can detect it, and technology keeps compounding the problem.

Our own tests using the GeoPoll AI Engine confirm that AI models can now generate human-like narratives, differentiated “voices”, realistic demographic profiles, and varied completion speeds. As long as incentives exist, fraudulent responders will continue to innovate.

Meanwhile, many panel providers rely on legacy systems built for a world where fraud meant speeding or straight-lining. They were not designed to detect AI paraphrasing, synthetic behavioural fingerprints, cross-platform identity laundering, or real-time pattern anomalies.

This mismatch creates structural vulnerability.
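One check that even legacy pipelines can add cheaply is near-duplicate detection on open-ended text, since templated or AI-generated answers often repeat almost verbatim across supposedly distinct respondents. The sketch below is an illustrative baseline, not a production detector; the `near_duplicates` helper and the 0.9 similarity threshold are our own assumptions, and real systems would add semantic and stylometric analysis on top.

```python
import difflib

def near_duplicates(open_ends, threshold=0.9):
    """Flag pairs of open-ended answers that are suspiciously similar.

    Returns (index_i, index_j, similarity) tuples for every pair whose
    case-insensitive similarity ratio meets the threshold.
    """
    pairs = []
    for i in range(len(open_ends)):
        for j in range(i + 1, len(open_ends)):
            ratio = difflib.SequenceMatcher(
                None, open_ends[i].lower(), open_ends[j].lower()
            ).ratio()
            if ratio >= threshold:
                pairs.append((i, j, round(ratio, 2)))
    return pairs
```

Flagged pairs would then be routed to human review rather than dropped automatically, since some repetition is legitimate.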

What this means for researchers and clients

Poor-quality sample data has obvious consequences, the most immediate of which include:

  • Misleading insights
  • Incorrect targeting
  • Wasted budgets
  • Incorrect strategic decisions
  • Damaged credibility

But the deeper consequence is even more serious: If the industry does not rebuild trust in online sampling, brands and organizations will hesitate to rely on survey research at all. When decision-makers cannot trust the integrity of respondent data, they begin to question the value of surveys as a method. This is the real risk—an industry-wide credibility problem.

A reliable respondent ecosystem rests on three foundations: identity, location, and behaviour.

Respondents must be tied to real, verifiable identities. Their location must reflect where they actually are, not where their VPN says they are. And their behaviour must reflect natural human variation—not the automated consistency of scripts, bots, or artificially generated text.

These are basic principles, but in an era of synthetic identities and AI-driven fraud, they require much more rigorous systems to uphold.
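As a sketch of how those three foundations translate into checks, the hypothetical `flag_respondent` function below flags a respondent on each axis: a geo-IP mismatch (location), a reused device (identity), and implausibly uniform answer timing (behaviour). The field names, thresholds, and flag labels are illustrative assumptions, not GeoPoll's actual rules.

```python
import statistics

def flag_respondent(claimed_country, ip_country, device_id, seen_devices, answer_seconds):
    """Return a list of fraud flags for one respondent (empty list = no flags)."""
    flags = []
    # Location: the geo-IP country should match the claimed country;
    # VPN use typically breaks this.
    if claimed_country != ip_country:
        flags.append("geo_mismatch")
    # Identity: the same device appearing across many accounts
    # suggests account farming.
    if device_id in seen_devices:
        flags.append("duplicate_device")
    # Behaviour: humans vary; near-zero variance in per-question
    # timing suggests a script.
    if len(answer_seconds) >= 3 and statistics.pstdev(answer_seconds) < 0.2:
        flags.append("uniform_timing")
    return flags
```

Real systems would score these signals probabilistically rather than hard-flagging, but the principle is the same: every axis needs its own evidence.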

How the industry should respond

Online sampling is not going away; if anything, demand will increase. But the industry must adapt. Fraud is evolving faster than legacy panel systems can respond, and researchers cannot afford to rely on outdated assumptions about respondent authenticity.

The future belongs to providers who treat data quality as a core capability, and not a back-office function. Those who invest in verification, diversify sampling modes, apply advanced fraud detection, and communicate transparently will set the new standard. The rest will continue to generate “junk” data and erode trust in research.

Rebuilding trust in online sampling will require a combination of technology, methodological discipline, and transparency.

  • Strengthen Identity Verification: Email-based registration is no longer sufficient. Providers need to move toward systems grounded in SIM-based verification, mobile operator partnerships, two-factor authentication, and device-level identity checks. Emerging markets with national SIM registration frameworks have a distinct advantage here.
  • Detect Fraud Behaviourally: Quality control must evolve beyond speeding and straight-lining. Modern systems should detect unusual device patterns, inconsistent browser fingerprints, abnormal timing sequences, proxy use, and other signs of automation. This has to happen pre-survey, not only during data cleaning.
  • Use AI to Fight AI: Just as AI can generate deceptive responses, AI can also detect them. Linguistic analysis, stylometric fingerprints, and semantic anomaly detection are becoming essential tools for flagging artificial or copy-pasted open-ended text.
  • Apply Human Oversight on High-Stakes Work: For sensitive audiences or high-value projects, manual review remains indispensable. Calling back a sample of respondents, checking claims when relevant, or auditing open-ended text can act as guardrails against fraud that slips through automated systems.
  • Reduce Reliance on Third-Party Traffic: Panels built on first-party respondent networks, such as mobile communities, app-based samples, and telco-linked panels, are inherently more secure than those that rely on opaque third-party supply. Direct relationships create accountability and allow for deeper verification.
  • Blend Modes When Necessary: Some populations or markets simply cannot be reliably captured through online traffic alone. Combining online surveys with CATI, SMS, WhatsApp, in-person intercepts, or panel phone lists reduces exposure to any single failure mode and strengthens representativeness. This is why, at GeoPoll, we champion multimodal approaches to research.
  • Be Transparent With Clients: Clear reporting on quality checks, verification processes, and exclusion rates builds trust. As fraud grows more sophisticated, transparency becomes a competitive advantage.
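The legacy checks mentioned above, speeding and straight-lining, remain a useful first filter even though they are no longer sufficient on their own. The sketch below shows minimal versions of both; the function names, the 90% same-answer share, and the one-third-of-median speed floor are illustrative assumptions, not industry standards.

```python
def is_straight_lined(grid_answers, max_share=0.9):
    """True if one answer value dominates a rating grid (e.g. all '3's)."""
    top = max(grid_answers.count(v) for v in set(grid_answers))
    return top / len(grid_answers) >= max_share

def is_speeding(duration_seconds, median_duration_seconds, floor=1/3):
    """True if the interview finished implausibly faster than the median."""
    return duration_seconds < median_duration_seconds * floor
```

Modern fraud detection layers behavioural fingerprinting and linguistic analysis on top of baselines like these, and applies them pre-survey as well as during cleaning.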

How GeoPoll approaches online sampling to reduce these risks

These issues are increasingly common, but they are avoidable with the right systems. GeoPoll’s platforms and processes are deliberately designed to protect data integrity and put the voice of real humans first. Our model was built for the types of environments where online sampling is now struggling most. Our respondent network is anchored in mobile-first infrastructure, with SIM-linked verification and direct partnerships that ensure respondents are real people, reachable through real devices.

We complement this with multi-mode data collection – CATI, mobile web, SMS, WhatsApp, app-based sampling, and in-person CAPI – so no single sampling method carries the full burden of quality. Our AI-powered fraud detection systems now track behavioural anomalies, detect AI-like response patterns, and monitor unusual activity across surveys. And for complex or high-stakes studies, our teams perform human review of suspicious profiles or open-ended answers.

Contact us to learn more about how we safeguard the validity of your data collection.

The post The Online Sampling Crisis: Why Bad Data is Rising and how to Stop it appeared first on GeoPoll.

]]>
The Seed Marketing Report: How Farmers in Kenya and Uganda Choose, Use, and Trust Maize and Bean Seeds https://www.geopoll.com/blog/seed-report-east-africa/ Wed, 19 Nov 2025 06:27:19 +0000 https://www.geopoll.com/?p=25399 GeoPoll and Resourced are pleased to release a new multi-year study that sheds light on how farmers in Kenya and Uganda discover, […]

The post The Seed Marketing Report: How Farmers in Kenya and Uganda Choose, Use, and Trust Maize and Bean Seeds appeared first on GeoPoll.

]]>
GeoPoll and Resourced are pleased to release a new multi-year study that sheds light on how farmers in Kenya and Uganda discover, evaluate, and adopt improved maize and bean varieties. Conducted under the Seed Marketing Insights & Adoption (SMIA) program, this research provides one of the most comprehensive demand-side perspectives on seed decision-making in East Africa, at a time when resilient, high-performing seed is more essential than ever for food security and agricultural economics.

Covering three waves of data collection from the 2023 baseline to the 2025 endline, the study tracks the evolution of farmer behavior across awareness, variety switching, preferred traits, purchasing barriers, marketing sources, and brand engagement. The findings show a changing landscape, where digital channels are rising, traditional networks are shifting, and farmers are becoming more informed and intentional in their seed choices.

What the Report Covers

This new report provides a comprehensive picture of how farmers make seed decisions across Kenya and Uganda. It explores the full journey — from awareness and trust, to variety switching, purchasing behavior, marketing channels, and engagement preferences. The analysis covers:

  • Seed sources and access pathways: How farmers obtain maize and bean seed, and how sourcing patterns have evolved across the three waves.
  • Barriers to purchasing improved seed: Financial, trust-related, and availability challenges that affect adoption.
  • Net Promoter Scores (Recommend and Knowledge): How farmers perceive the varieties they use — and how well they can recall specific brands or names.
  • Variety switching behavior: Who switches, why they switch, why they don’t, and how frequently switching occurs.
  • Preferred traits and performance priorities: What farmers value most, from yield and drought tolerance to maturity and vigor.
  • Marketing and information channels: Which platforms and communication channels influence farmers, including the rise of digital marketing and the stable importance of radio.
  • Social media usage and engagement preferences: How farmers use Facebook, WhatsApp, and other platforms, and which channels they prefer for interacting with seed companies.
  • Most preferred ways to engage with seed companies: How farmers want companies to communicate with them, from training events and demos to digital channels.

These sections together provide a complete, accessible, and actionable picture of how the seed market is evolving from the farmer’s perspective.
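The report's "Recommend" and "Knowledge" metrics are Net Promoter Scores. Assuming the standard NPS formula (the report does not spell out its exact computation), the score is the percentage of promoters (ratings 9–10) minus the percentage of detractors (ratings 0–6) on a 0–10 scale:

```python
def nps(scores):
    """Net Promoter Score from 0-10 ratings:
    % promoters (9-10) minus % detractors (0-6), as an integer."""
    n = len(scores)
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / n)
```

So a sample where 70% rate a variety 9 or 10 and 30% rate it 6 or below would score +40.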

Download the Full Report

The full report provides analysis, demographic profiles, cross-country comparisons, and recommendations for building trust, driving adoption, and strengthening marketing strategies.



Interactive Dashboard

To complement the report, dig into this dynamic, interactive dashboard where you can explore trends by crop, country, age group, gender, and other demographics.


Contact Us

For more information about this project, to get clarification on any section of the data, or to learn more about our capabilities, please contact GeoPoll.

The post The Seed Marketing Report: How Farmers in Kenya and Uganda Choose, Use, and Trust Maize and Bean Seeds appeared first on GeoPoll.

]]>
Africa’s Digital Future Unfolds at MWC Kigali: Reflections from GeoPoll https://www.geopoll.com/blog/mwc-kigali-reflections/ Tue, 18 Nov 2025 08:23:08 +0000 https://www.geopoll.com/?p=25401 I had the opportunity to attend the GSMA Mobile World Congress (MWC) Africa 2025 in Kigali, Rwanda, one of the continent’s most […]

The post Africa’s Digital Future Unfolds at MWC Kigali: Reflections from GeoPoll appeared first on GeoPoll.

]]>
I had the opportunity to attend the GSMA Mobile World Congress (MWC) Africa 2025 in Kigali, Rwanda, one of the continent’s most influential gatherings of leaders in telecoms, technology, and digital innovation. Themed “From Smart to AI Smart: Africa’s Business Transformation Driven by AI,” this year’s event highlighted how artificial intelligence is rapidly shifting from experimentation to execution across sectors.

The conversations were dynamic, purposeful, and deeply aligned with GeoPoll’s mission of enabling organizations to access real-time, high-quality data across emerging markets. Below are some of my key reflections.

GeoPoll's JP at MWC Kigali

AI is the New Foundation of Africa’s Digital Transformation

A dominant takeaway from the conference was that AI is no longer a competitive edge; it is the foundation of business reinvention. Across industries, leaders demonstrated how AI and IoT are powering smarter agriculture, predictive analytics in fintech, intelligent automation in health and logistics, and data-driven policy design.

In a standout session moderated by Kitso Lemo (BCG), speakers including Mercy Ndegwa (Meta), Jamie Collinson (iSDA Virtual Agronomist), and Kevin Xu (Huawei Technologies) explored how AI is unlocking efficiency and inclusion across African economies.

The message was clear: Africa’s next leap forward depends on localized innovation powered by authentic African data.

Localization and the Data Imperative

Throughout MWC Kigali, participants emphasized the need for contextually relevant datasets to train AI models that reflect Africa’s languages, cultures, and consumer realities. This challenge is precisely where GeoPoll brings unique value.

Through GeoPoll AI Data Streams, we’ve built one of the world’s largest repositories of structured voice data from Africa with over 450,000 hours of verified recordings from more than 1 million individuals, spanning 100+ languages. These datasets are ethically sourced, demographically representative, and purpose-built for training Automatic Speech Recognition (ASR) models, Large Language Models (LLMs), and Generative Voice applications.

Localized datasets like these ensure that future AI systems, from chatbots to digital assistants, truly understand and serve African users.

Mobile-Led Innovation in Fintech, Gaming, and Everyday Life

The conference also spotlighted mobile-first innovation across fintech, entertainment, and gaming. In conversations with leaders from Visa, MTN, and GSMA, it became evident that Africa’s mobile ecosystem continues to drive engagement, commerce, and creativity.

GeoPoll’s own Gaming in Africa Report (2024) revealed that mobile dominates Africa’s gaming landscape, with 92% of gamers using Google Play and 63% making in-app purchases, many through mobile-money platforms. These insights reinforce MWC’s broader message: Africa’s digital future is mobile-first, data-driven, and youth-powered.

GeoPoll’s Role in Africa’s Digital Future

At GeoPoll, we sit at the intersection of data, technology, and social impact. Our proprietary solutions, from TuuCho, our always-on consumer-insights platform, to WhatsApp Research Communities (MROCs), and AI-driven Social and Speech Intelligence, empower organizations to understand audiences, test ideas, and monitor sentiment in real time.

Being part of MWC Kigali reaffirmed that Africa’s most transformative innovation will come not just from technology itself but from inclusive data that amplifies Africa’s voice.
That’s where GeoPoll continues to invest, in building the data infrastructure that powers decision-making and fuels AI innovation.

Looking Ahead

As AI, IoT, and mobile connectivity converge, Africa’s digital growth story is entering a bold new phase, one defined by intelligence, inclusion, and innovation at scale.

At GeoPoll, we’re proud to contribute to that story by providing the insights, tools, and data networks that help organizations turn algorithms into action.

John (JP) Murunga is GeoPoll’s Regional Director, Africa.

The post Africa’s Digital Future Unfolds at MWC Kigali: Reflections from GeoPoll appeared first on GeoPoll.

]]>
Kenya’s Financial Landscape Report https://www.geopoll.com/blog/kenyas-financial-landscape-report/ Fri, 14 Nov 2025 09:03:46 +0000 https://www.geopoll.com/?p=25364 Kenya’s financial landscape stands as one of the most dynamic in Africa, driven by rapid digitization, high mobile money adoption, and continued […]

The post Kenya’s Financial Landscape Report appeared first on GeoPoll.

]]>
Kenya’s financial landscape stands as one of the most dynamic in Africa, driven by rapid digitization, high mobile money adoption, and continued efforts toward financial inclusion. The country is globally recognized for the success of M-Pesa, which has transformed the way Kenyans send, receive, and store money since its launch in 2007. Today, mobile money platforms are used by over 90% of adults, enabling seamless payments, savings, and access to credit.

According to the FinAccess Household Survey 2024, 84.8% of Kenyan adults now have access to formal financial services, marking a significant milestone in inclusion. The Central Bank of Kenya’s Financial Sector Stability Report 2024 further notes the rising role of digital lending, with non-bank credit providers and mobile loan apps becoming key sources of short-term finance, though concerns remain over affordability, data privacy, and consumer protection.

This Kenya-focused study forms part of a broader Sub-Saharan Africa Financial Services and Usage Report, which examined evolving financial behaviors across multiple African markets. Powered by TuuCho, GeoPoll conducted the study via its application and mobile web platform, reaching a total of 2,500 respondents and offering a comprehensive snapshot of how Kenyans access, use, and perceive financial services, from mobile wallets to traditional banking and emerging credit solutions. By situating Kenya’s findings within the regional context, the report highlights both the country’s leadership in digital finance innovation and the ongoing need to balance accessibility with responsible lending and financial literacy.

Demographic Overview

The survey gathered responses from a diverse group of young Kenyans, with most aged between 25 and 34 years (52%). Males accounted for 64% of respondents and females 36%, with a majority living in urban areas (73%) compared to rural areas (27%). In terms of income, most respondents fall within lower to mid-income brackets, reflecting the importance of affordable financial solutions. About 34% earn between KES 10,000 and 35,000 per month, while 31% earn below KES 10,000. A smaller but growing middle-income segment, representing 15%, earns between KES 35,000 and 50,000 monthly.

Sources of Income

The data indicates that most Kenyans derive their income from formal employment and small businesses, reflecting a mixed but evolving labor landscape. A significant 37% of respondents earn their primary income through salaries or wages from formal employment, showing the continued importance of structured jobs, particularly in urban centers. The second-largest source of income is business profits or self-employment, reported by 21% of respondents, highlighting Kenya’s strong entrepreneurial culture and the role of micro, small, and medium enterprises in sustaining livelihoods. Casual or daily labor ranks third at 11%, pointing to a sizeable portion of the population engaged in informal or short-term work.

Financial Service Usage in Kenya

The findings reveal that mobile money platforms remain the dominant financial service in Kenya, reflecting their central role in everyday transactions and financial inclusion. About 67% of respondents reported using mobile money services such as M-Pesa, far surpassing all other financial channels. This demonstrates the continued integration of mobile finance into both personal and business activities across the country. The second most used service is bank accounts (including savings and checking), cited by 18% of respondents, showing that while traditional banking remains important, it lags behind mobile-based solutions in accessibility and usage. SACCOs and cooperatives follow distantly at 5%, indicating their niche but trusted role, particularly in rural and community-based financial systems. The comparatively low adoption of microfinance services (4%), digital lending apps (3%), and insurance services (1%) points to opportunities for growth in formal and digital finance beyond payments, especially in credit, savings, and risk protection products.

Mobile Money Usage in Kenya

Mobile money continues to define Kenya’s financial landscape, reaching near-universal adoption. According to the survey, an overwhelming 98% of respondents reported using mobile money services such as M-Pesa or Airtel Money, confirming its position as the country’s dominant financial tool. This near-total penetration reflects how mobile wallets have become deeply embedded in daily financial activity, bridging gaps in formal banking access and enabling real-time transactions for millions.

When asked about their main uses of mobile money, Kenyans demonstrated its versatility beyond simple transfers. The majority use it for sending (79%) and receiving money (78%), followed closely by paying for goods and services (73%) and settling bills (70%) such as electricity, water, and internet. Additionally, nearly half (49%) use mobile money for savings, while 32% rely on it for loans or credit, reflecting the expanding role of digital finance in meeting broader financial needs. This shows that mobile money has evolved from a payment platform into a multifunctional ecosystem supporting both transactional and financial management activities.

In terms of frequency of use, engagement is remarkably high, 49% of respondents use mobile money daily, while another 39% transact several times a day. Only a small minority use it weekly or less often. These patterns demonstrate how integral mobile money has become to everyday life in Kenya, facilitating everything from routine purchases to income management. The findings highlight a mature and highly active digital finance environment, where convenience, trust, and accessibility drive sustained adoption and frequent usage.

Bank Account Ownership and Usage in Kenya

Banking access in Kenya remains significant, though not as widespread or actively used as mobile money. The findings show that 83% of respondents have a bank account, while 17% do not. Among account holders, 40% maintain a savings account, 23% have a current or checking account, and 21% hold both types. This indicates that most users prioritize savings-based products, aligning with Kenya’s growing culture of financial prudence and long-term planning. However, the relatively high share of individuals without bank accounts highlights the continued importance of alternative financial systems such as mobile money and SACCOs.

In terms of frequency of bank use, activity levels are moderate to low. About 36% of respondents use their bank accounts rarely, while another 33% engage with them monthly. Only 22% access their accounts weekly, and 11% use them daily. This suggests that while many Kenyans maintain formal banking relationships, everyday transactions are far more likely to occur through mobile platforms, which offer greater convenience and accessibility for routine financial needs.

When asked about their main reasons for using bank accounts, respondents cited receiving income (35%) and saving money (35%) as the top purposes. Smaller proportions reported using banks to pay bills or school fees (8%), conduct business transactions (6%), or access credit or loans (4%). These findings show that banks remain trusted for secure deposits and salary handling, but are less integrated into the daily financial activities that mobile money now dominates. The data points to a hybrid financial environment where formal banking serves as a foundation for savings and income management, while digital tools drive everyday financial interactions.

Top Banks (% of Mentions)

Among the respondents, the top five preferred banks in Kenya are KCB Bank (32%), Equity Bank (29%), Co-operative Bank (11%), I&M Bank (3%), and Absa Bank (3%). The results show a strong preference for Kenyan-owned institutions, with KCB, Equity, and Co-operative Bank collectively accounting for over 70% of mentions. Their dominance highlights the strength of homegrown banks that have built extensive networks and deep community trust, while I&M and Absa represent smaller but established players within the country’s diversified banking sector.

Borrowing Trends and Loan Sources in Kenya

The findings reveal a nearly even split in borrowing activity among Kenyan respondents. About 47% reported having taken a loan in the past 12 months, while 53% had not. This balance suggests that credit access is relatively widespread but still moderated by income levels, financial literacy, or risk aversion.

When asked about their sources of borrowing, mobile lending apps emerged as the most common option, used by 30% of respondents. Their popularity reflects the convenience and speed of digital credit solutions like M-Shwari, Tala, and Branch. Commercial banks followed at 24%, indicating that traditional financial institutions remain an important source of formal credit, particularly for salaried individuals. Other notable borrowing sources include family or friends (20%), SACCOs or cooperatives (15%), and government funds (15%), showing a blend of formal and informal mechanisms in Kenya’s credit landscape. A smaller share borrowed from microfinance institutions (15%) and informal moneylenders (9%), suggesting that while access to credit is broad, affordability and regulation remain ongoing challenges.

Regarding the main reasons for borrowing, emergencies (27%) topped the list, followed by business purposes (23%) and school or education fees (12%). These patterns highlight that borrowing in Kenya is largely driven by short-term needs and income-support activities, rather than asset acquisition or long-term investments. Fewer respondents cited borrowing for food (7%), household expenses (5%), or asset purchases (4%), reinforcing that loans are often used as financial buffers rather than tools for wealth creation.

Familiarity with Insurance Products

Most Kenyans demonstrate a solid awareness of insurance, with about 40% saying they are very familiar with different insurance products and providers. Another 33% are somewhat familiar, showing moderate understanding. However, around 28% have only heard of insurance or are not familiar at all, indicating that while awareness is widespread, deeper understanding remains limited across portions of the population.

Insurance Uptake and Coverage Types

Nearly half of respondents, 48%, reported having taken an insurance policy, while 53% said they have not. Among those insured, health insurance dominates at 48%, followed by life insurance at 17% and motor insurance at 14%. Around 36% of respondents currently have no insurance coverage, revealing significant opportunity for growth in other categories such as property, agricultural, and home insurance.

Barriers to Insurance Uptake

The main challenge limiting insurance adoption is affordability, with about 41% citing high premiums as the biggest deterrent. Another 24% pointed to lack of clear information or understanding, while 14% mentioned limited product availability. Roughly 13% said they do not see the need for insurance. These findings highlight the need for more affordable, transparent, and accessible insurance options tailored to Kenyan consumers.

Trust in Insurance Companies

Trust levels in insurance companies are moderate. About 44% of Kenyans have mixed feelings, 24% are cautious or skeptical, and 21% fully trust insurers. Only 12% say they do not trust them at all. These results show that while awareness is growing, confidence remains limited, highlighting the need for insurers to improve transparency and build stronger customer relationships.

Challenges, Barriers, and Satisfaction with Financial Services

High fees remain a major concern across both mobile money and formal financial services, with 34% of respondents citing them as the main challenge in fintech use and 46% identifying them as the biggest barrier to accessing formal financial systems. Other significant issues include network downtime at 28% and fraud or security concerns at 25%, while customer service and digital literacy challenges were reported by fewer users.

Despite these challenges, overall satisfaction with financial services is fairly positive. About 41% of respondents reported being satisfied and 14% very satisfied, while 38% were neutral. Only a small proportion, roughly 8%, expressed dissatisfaction. This suggests that although costs and service reliability are key pain points, most users acknowledge some level of satisfaction with available financial services.

When asked about improvements that would encourage more frequent use, nearly 45% of respondents called for lower fees. Better customer service and easier access to branches or agents were also seen as important by 20% and 19%, respectively. These insights highlight a clear demand for affordability, convenience, and improved service delivery to enhance engagement with financial products in Kenya.

Financial Constraints and Major Life Decisions

A large majority of respondents, 71%, reported postponing major life plans such as marriage, education, or starting a business due to financial reasons. Only 29% said they had not delayed any major plans. This indicates that financial challenges remain a significant barrier to personal progress for many Kenyans, affecting long-term goals and overall economic well-being.

Consumer Spending Adjustments

A significant 79% of respondents reported switching to a cheaper alternative product, while only 22% said they had not. This shows that most Kenyans are making cost-conscious decisions, likely driven by economic pressure and the rising cost of living, prioritizing affordability over brand or quality preferences.

Conclusion

Kenya’s financial landscape continues to set the pace for digital innovation in Africa, yet clear gaps remain between access, affordability, and depth of use. With 84.8% of adults now financially included and mobile money reaching 98% penetration, Kenya has achieved remarkable progress in expanding access to financial tools. However, challenges persist: 41% of respondents cite high fees as the main barrier to insurance and financial service uptake, while 44% express only moderate trust in insurance providers.

Financial strain remains widespread, with 71% of Kenyans delaying major life decisions due to money constraints and 79% opting for cheaper products to cope with rising costs. Despite these pressures, 55% of users report being satisfied or very satisfied with available financial services, evidence of a population that remains resilient, adaptive, and optimistic. Moving forward, Kenya’s financial ecosystem must prioritize affordability, transparency, and responsible innovation to ensure that its digital success story translates into sustainable financial well-being for all.

Methodology/About this Survey

This exclusive survey was powered by Tuucho, GeoPoll's AI platform, and run via the GeoPoll mobile application and mobile web in Kenya. The sample comprised 2,500 randomly selected users aged 18 to 50. Because the survey was randomly distributed to an affluent audience, the results are slightly skewed towards younger respondents.

These insights highlight not only the evolving nature of Kenya’s financial landscape, but also the power of GeoPoll in uncovering meaningful, data-driven narratives across diverse populations. Through its robust mobile-based survey technology and extensive reach across emerging markets, GeoPoll delivers fast, reliable, and actionable financial data that helps organizations, policymakers, and researchers understand consumer behavior, financial inclusion, and economic trends in real time. As digital finance continues to transform access and usage across Africa, GeoPoll remains at the forefront, bridging the gap between people and insights, and enabling smarter decisions through a deeper understanding of financial realities.

Please get in touch with us for more details about our capabilities, or to explore more topics across Africa, Asia, and Latin America.

The post Kenya’s Financial Landscape Report appeared first on GeoPoll.

GeoPoll Launches Senselytic to Bring AI-Powered Qualitative Insight to Quantitative Surveys https://www.geopoll.com/blog/senselytic-launch/ Wed, 05 Nov 2025 11:35:40 +0000

GeoPoll is pleased to announce the launch of GeoPoll Senselytic, a new AI-powered capability designed to extend traditional quantitative surveys with automated qualitative analysis. Senselytic enables researchers to capture and interpret unstructured responses from survey participants, providing richer, more contextual insights alongside standard quantitative data.

Traditional survey methods are highly effective in capturing structured information at scale, but often miss the nuance and emotion behind responses. Senselytic bridges this gap by analyzing natural conversations to reveal themes, sentiment, and patterns in real time. Combined with GeoPoll’s Speech Analytics AI Engine, Senselytic allows clients to uncover the why behind the data, faster and more cost-effectively than conventional qualitative methods.

[Image: GeoPoll Senselytic, qual at scale]

Expanding What Surveys Can Deliver

Senselytic is fully integrated into GeoPoll’s existing data collection infrastructure, including CATI, CAPI, SMS, and online platforms. During standard interviews, enumerators or AI-powered workflows for self-administered surveys can include short, open-ended prompts, enabling respondents to elaborate freely on key topics and naturally share context, emotion, and experience.

Responses are automatically processed by GeoPoll’s proprietary AI models, which transcribe and analyze patterns across thousands of interviews, languages, and contexts. The system identifies recurring themes and emotional tones, producing structured outputs that complement quantitative findings. This approach allows organizations to understand both what people think and why they think it, without conducting separate qualitative studies or extending fieldwork timelines.
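To make the idea concrete, here is a minimal, illustrative sketch of the kind of theme-and-sentiment tagging described above, applied to open-ended survey responses. GeoPoll's actual Senselytic models are proprietary; every theme, keyword list, and function name below is a hypothetical stand-in, not GeoPoll's real pipeline.

```python
# Toy sketch: keyword-based theme and sentiment tagging for open-ended
# survey responses, aggregated into a structured output that can sit
# alongside closed-ended tallies. Themes and lexicons are hypothetical.
import re
from collections import Counter

THEMES = {
    "affordability": {"cheap", "expensive", "afford", "price", "cost"},
    "trust": {"trust", "reliable", "scam", "honest"},
    "access": {"agent", "branch", "network", "nearby"},
}
POSITIVE = {"good", "happy", "reliable", "honest", "cheap"}
NEGATIVE = {"expensive", "scam", "slow", "bad"}

def tag_response(text: str) -> dict:
    """Return the themes and a crude sentiment label for one response."""
    words = set(re.findall(r"[a-z']+", text.lower()))
    themes = [name for name, kws in THEMES.items() if words & kws]
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"themes": themes, "sentiment": sentiment}

def summarize(responses: list[str]) -> Counter:
    """Aggregate theme frequencies across all responses."""
    counts = Counter()
    for r in responses:
        counts.update(tag_response(r)["themes"])
    return counts

responses = [
    "Insurance is too expensive for me",
    "I trust my mobile money agent, very reliable",
]
print(summarize(responses))
```

A production system would replace the keyword lists with trained language models, handle multiple languages, and start from transcribed audio, but the shape of the output, recurring themes with emotional tone attached, is the same.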

Senselytic builds on GeoPoll’s decades of experience in large-scale research across emerging markets. It leverages GeoPoll’s multilingual infrastructure, local expertise, and methodological rigor while adding an AI layer that enables faster, deeper, and more consistent qualitative insight.

“We see Senselytic as a natural evolution of how GeoPoll delivers data and understanding,” said Nicholas Becker, CEO at GeoPoll. “It allows us and our clients to capture not only measurable outcomes but also the human context behind them – something that’s been missing in traditional large-scale surveys.”

Early Applications

In recent studies, GeoPoll Senselytic has already demonstrated its potential to transform research outcomes across both development and commercial sectors, generating more actionable insights from the same data collection investment.

  • In a regional food security assessment in Latin America for an international development organization, open-ended responses captured through Senselytic revealed new insights into household coping strategies and trust in aid programs, in citizens' own words, providing context that traditional metrics alone would have missed.
  • In a consumer perception study in East Africa for a global consultant, the AI analysis of open-ended conversations with respondents identified emotional drivers, such as aspiration, authenticity, and affordability, helping the client refine its brand positioning and communications strategy.

Deeper Insight at Scale

Senselytic combines the structure of quantitative surveys with the depth of qualitative analysis to provide:

  • A more complete understanding of respondent motivations and experiences
  • Faster turnaround compared to manual qualitative analysis
  • Consistent, repeatable analysis across thousands of interviews
  • Seamless compatibility with existing GeoPoll workflows and reporting systems

Availability

Senselytic is now available as an optional extension to all GeoPoll survey methodologies. For more information or to request a demonstration, visit www.geopoll.com/senselytic.

The post GeoPoll Launches Senselytic to Bring AI-Powered Qualitative Insight to Quantitative Surveys appeared first on GeoPoll.

Celebrating 11 Incredible Years with Peter “Pete” Omolo at GeoPoll https://www.geopoll.com/blog/celebrating-11-incredible-years-with-peter-pete-omolo-at-geopoll/ Wed, 05 Nov 2025 07:08:20 +0000

At GeoPoll, we believe that our strength lies in the people who make things happen behind the scenes every single day. Today, we’re celebrating one such remarkable individual, Peter Omolo, affectionately known as Pete, who marks 11 years of dedication, innovation, and excellence with GeoPoll!

As our Senior Network & Systems Engineer, Technology Operations, Pete has been the backbone of our technical infrastructure, ensuring that our global systems run smoothly, securely, and efficiently. His deep technical expertise and unwavering commitment have played a vital role in keeping our operations reliable, enabling GeoPoll to deliver high-quality data to clients and partners across the world.

Over the years, Pete’s contributions have gone far beyond systems and servers. His calm problem-solving approach, teamwork, and mentorship have inspired those around him and strengthened our entire tech team. Whether it’s resolving critical network issues or implementing cutting-edge solutions, Pete’s dedication ensures GeoPoll stays connected and operational around the clock.

We’re incredibly grateful for over a decade of innovation, reliability, and leadership. Thank you, Pete, for your hard work and for embodying the spirit of GeoPoll every day.

Here’s to many more years of success, growth, and impact together!

The post Celebrating 11 Incredible Years with Peter “Pete” Omolo at GeoPoll appeared first on GeoPoll.
