
Data Collection

How to collect data

In addition to planning and selecting your evaluation design, you also need to figure out how to collect evaluation data. There are two main types of data: quantitative and qualitative.

Quantitative data is numerical and quantifiable. It is particularly useful when you are trying to establish causality between an independent variable (program or activity) and a dependent variable (food safety knowledge, attitudes, behavior, etc.), or when you want to obtain a score or rating on a topic, such as a knowledge score. Quantitative methods work best when the subject is well researched and when you have a valid and reliable data collection tool. They are usually considered more objective and less biased than qualitative methods, and it can be easier to demonstrate the validity and reliability of quantitative data than of qualitative data.
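To make the idea of a knowledge score concrete, here is a minimal sketch (in Python) of scoring quiz-style responses against an answer key. The item names and answer options are hypothetical and not drawn from any particular validated instrument.

```python
# Minimal sketch: scoring a food safety knowledge quiz against an answer key.
# Items and answer options are hypothetical.

ANSWER_KEY = {
    "q1_safe_min_temp_poultry": "165F",
    "q2_max_hours_perishables_out": "2 hours",
    "q3_wash_raw_chicken": "no",
}

def knowledge_score(responses: dict) -> float:
    """Return the percent of items answered correctly; unanswered items count as incorrect."""
    correct = sum(1 for item, answer in ANSWER_KEY.items()
                  if responses.get(item) == answer)
    return 100 * correct / len(ANSWER_KEY)

participant = {"q1_safe_min_temp_poultry": "165F",
               "q2_max_hours_perishables_out": "4 hours",
               "q3_wash_raw_chicken": "no"}
print(knowledge_score(participant))  # about 66.7 (2 of 3 items correct)
```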

Qualitative data is generally non-numerical and more exploratory in nature. It is used to identify important themes related to a particular topic and to gather detailed insight into more complex issues [11]. Qualitative data collection methods can provide valuable information about personal thoughts, experiences, feelings, and interpretations that are often overlooked when using quantitative methods. Qualitative methods are particularly useful when little is known about the research topic.

Below are different data collection methods you could use for your evaluation. Consider the benefits and limitations of each option in relation to your resources, the purpose of your evaluation, and your target audience.

Survey/Questionnaire

Description:
  • Self-reported.
  • Usually a series of questions, provided online or on paper, to collect numerical data.
  • It is important to use a survey that has been pre-tested and proven to be reliable and valid.

Benefits:
  • Good for collecting quantitative data that can be statistically analyzed.
  • Good for assessing food safety knowledge.
  • Inexpensive and requires less time, staff training, and support.
  • Can be a good method for establishing causality between an independent variable (program or activity) and a dependent variable (food safety knowledge, attitudes, behavior, etc.).

Limitations:
  • May overlook deeper personal meanings, attitudes, or perceptions related to food handling and why people think or behave the way they do.
  • Risk of self-reported biases.

Focus groups

Description:
  • Self-reported/descriptive.
  • Usually consists of a group of 8 to 12 individuals who come together to answer questions and discuss pre-determined topics as a collective. A facilitator is usually present to facilitate dialogue and guide the discussion. Descriptive data is collected and later analyzed.
  • It can be helpful to keep some questions open-ended in order to gather information that is relevant and important but that may not have been considered when focus group questions were developed.
  • Focus group discussions are often recorded and later transcribed. It can also be beneficial for the facilitator to take notes on relevant nonverbal expressions to supplement the transcriptions.

Benefits:
  • Most common method for collecting qualitative food safety information [14].
  • Good for collecting qualitative data.
  • Can provide valuable information about thoughts, attitudes, perceptions, experiences, values, personal interpretations, and meanings that can often be overlooked when using quantitative methods.
  • Useful for exploring topics on which little is known.
  • Interactions between group members may provide valuable insight on the topic, which can be overlooked when focusing only on individuals.
  • Can be a less expensive and less time-consuming way to collect qualitative data.

Limitations:
  • Need a skillful facilitator to encourage a productive discussion.
  • Can sometimes face scheduling difficulties in finding a time suitable for all participants to meet [17].
  • Analysis can be time consuming.
  • May be difficult to find participants who are willing to openly share their personal thoughts and feelings in a group setting.
  • One or a few individuals may dominate the discussion, so it is important for the facilitator to encourage equal participation.
  • Risk of self-reported biases.

One-on-one interview

Description:
  • Self-reported.
  • Interviewer usually meets one-on-one with the interviewee, either in person or by phone, to ask pre-determined questions. Usually takes longer than providing a written questionnaire.
  • It can be good to develop interviewer training and a script for all interviewers for consistency.
  • It may be helpful to keep some questions open-ended in order to gather information that may be relevant and important but that may not have been considered when interview questions were developed.

Benefits:
  • Structured interviews can be good for collecting quantitative data with additional insight into why participants respond the way they do.
  • Good for collecting qualitative data.
  • Useful method to explore topics on which little is known.
  • Can provide valuable information about thoughts, attitudes, perceptions, experiences, values, personal interpretations, and meanings that can often be overlooked when using quantitative methods [17].

Limitations:
  • May be difficult to find participants who are willing to openly share one-on-one.
  • Can require a skillful interviewer to encourage a productive discussion or responses to questions, particularly with more sensitive topics.
  • Risk of self-reported biases.

Household audit

Description:
  • Observed behavior.
  • An audit tool is generally used to visually examine and score households based on factors related to safe food handling practices. For example, an audit can examine resources needed for proper cleaning, cleanliness of the kitchen, or storage of foods [14].
  • It is important to use an audit tool that has been pre-tested and proven to be reliable and valid.

Benefits:
  • Observing food safety behaviors might provide more accurate and objective information than relying on self-reported information [14].
  • Can be a good complement to other forms of data collected, such as self-reported data.

Limitations:
  • Participants may not be willing or feel comfortable allowing auditors to come into their homes.
  • Participants may prep their home before the audit, making the household environment less realistic (social desirability).
  • An audit score may not provide the entire picture of why participants scored the way they did.
  • Can be subject to rater bias if some auditors score more leniently or harshly than others.

Observations in model or consumer home or kitchen

Description:
  • Observed behavior.
  • Participants are observed practicing a behavior or carrying out a specified task in a model or consumer home or kitchen.

Benefits:
  • Observing food handling behaviors might provide more accurate and objective information than relying on self-reported information [14].
  • Can be a good complement to other forms of data collected, such as self-reported data.

Limitations:
  • Can be difficult to implement, time consuming, and expensive.
  • May be difficult to find participants willing to be observed while demonstrating food safety behaviors.

Collect microbial data in homes or kitchens

Description:
  • Microbial samples are collected in participants’ homes or kitchens and then taken to a lab and analyzed.

Benefits:
  • Provides quantifiable data and information that cannot be collected via other quantitative and qualitative methods.
  • Can be a more objective method of collecting food safety information.
  • Can be a good complement to other forms of data collected, such as self-reported data.
  • Can provide insight on the presence and persistence of pathogens in domestic kitchens that can be valuable in developing recommendations for safe food handling practices at home [14].

Limitations:
  • Does not directly provide information on food safety behavior or KASA.
  • Participants may not be willing or feel comfortable allowing data collectors to come into their homes.

Mixed methods

Consider a mixed methods approach and using a combination of data collection methods to evaluate your program and gather insight into food safety knowledge, attitudes, and behavior. For example, collecting data via a questionnaire to assess food safety behavior and digging deeper into the topic via focus groups can provide a more well-rounded picture of outcome changes instead of solely relying on data from the questionnaire. Using qualitative methods to supplement quantitative methods can provide more background information on the topic and a greater understanding about why individuals responded the way they did quantitatively. Using mixed methods can also help you identify inconsistencies or inaccuracies when having to rely on self-reported data.

Self-reported data

When collecting self-reported data on food safety behaviors it is important to minimize potential threats to validity, such as recall and social desirability bias, to ensure that the data collected is reliable, consistent, and true. If possible, consider also using an observational method to collect the same behavior data in order to compare and analyze potential discrepancies between self-reported and observed behaviors. Below are examples of how studies have used mixed methods to gather food safety data:

  • To examine sanitation and food handling practices for ‘Chicken and Salad’ in Puerto Rican households, food and kitchen surface microbial samples were collected at different stages of food preparation. In addition, household observations were conducted to document storing, thawing, handling, and cooking practices. Observations and microbiological results were then compared to understand the impact of different food handling practices and the risk of microbial contamination [13].
  • To learn about how consumers prepare and cook ground beef for hamburgers, video footage of 199 volunteers in Northern California was analyzed for compliance with recommended practices. Following the filming of each session, questionnaires about food safety attitudes and knowledge were provided to each volunteer. When describing findings from the video observations, researchers provided further insight into why participants engaged in specific practices by including participants’ personal statements [22].
  • To explore home food safety knowledge, practices, and risk perception among Mexican-Americans, ten focus groups with 78 participants were conducted in New York and Texas. Focus group findings were then used to inform a probability-based survey that was administered to 468 Mexican-Americans who cook for their families. Findings from the focus groups and online surveys consistently identified several food safety concerns such as low use of thermometers, knowledge gaps about cross-contamination, and unsafe thawing practices [21].
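If you do collect both self-reported and observed data on the same behavior, a simple agreement check can flag discrepancies between the two. Below is a minimal sketch using hypothetical yes/no records of thermometer use; percent agreement and Cohen's kappa are computed by hand so no extra libraries are needed.

```python
# Minimal sketch: checking agreement between self-reported and observed behavior
# for the same (hypothetical) participants, e.g. "used a food thermometer".

self_reported = [1, 1, 1, 0, 1, 0, 1, 1]   # 1 = says they use a thermometer
observed      = [1, 0, 0, 0, 1, 0, 0, 1]   # 1 = seen using a thermometer

n = len(self_reported)
agreement = sum(s == o for s, o in zip(self_reported, observed)) / n

# Chance-expected agreement, from each source's marginal "yes" rates
p_self = sum(self_reported) / n
p_obs = sum(observed) / n
expected = p_self * p_obs + (1 - p_self) * (1 - p_obs)

kappa = (agreement - expected) / (1 - expected)
print(f"observed agreement: {agreement:.0%}, Cohen's kappa: {kappa:.2f}")
```

Low agreement between the two sources may indicate recall or social desirability bias in the self-reported data.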

Data collection tools

When thinking about what data collection tool to use, it is important to do some research to find out whether any tools have already been developed, validated, and used to address a topic similar to yours, with a similar population. Instead of starting from scratch, consider using existing tools or adapting them to suit your needs. Below are examples of existing food safety tools that have been tested for validity:

  • Audit tool for domestic kitchens [3,6]
  • Food safety psychosocial questionnaire for young adults [7]
  • Stages of change questions to assess consumer readiness to use a food thermometer when cooking small cuts of meat [23]
  • Food safety knowledge and attitude scales for consumer food safety education [20]
  • Checklist for observing food safety behaviors of young adults [5]
  • Consumer food behavior questionnaire [19]

Instrument/survey development

If you are developing your own instrument and questions for the evaluation, there are many things you should keep in mind such as including demographic questions, the length of your survey, health literacy and cultural sensitivity, and more. Below is a list of helpful tips to guide you throughout the instrument development process:

  • Include demographic questions in your survey. Responses can later be analyzed to find out how different characteristics influence food safety knowledge, attitudes, and behaviors. Gathering demographic information can also help you figure out who your program works best for. Alternatively, you might find out that your program doesn’t work well for a group of individuals and that you need to adjust the program slightly for that group. You could also find out that you need to provide different versions of your activities or materials for different groups. For example, you may find that some messages are resonating well with females but not as well with males, and that you need to review and adjust materials given to male participants. Collecting demographic information can also be helpful when analyzing data because you can adjust for certain characteristics to reduce threats to validity.

     

    • Demographic information you may want to ask for includes: age, gender, ethnicity, education level, number of individuals or children in the household, and income. Some questions might be more sensitive in nature, so it may be helpful to restate that responses are confidential and anonymous (if that is the case) before asking sensitive questions such as income. It may also be beneficial to leave sensitive demographic questions toward the end of the assessment to allow participants to warm up and feel more comfortable before having to respond to more personal questions.

  • When using quantitative data collection methods, make sure you use measures that are sensitive to change and can provide you with sufficient and useful information on the topic. For example, a 5-point Likert scale can usually provide more valuable information on a topic than dichotomous variables such as yes or no options [9].
  • Provide the option “I don’t know” so participants are not forced to pick a response that might not be true for them.

Sample questions using a Likert scale and an “I don’t know” response option: The next few questions are about how confident you feel about carrying out different food handling practices to prevent foodborne illness. Please circle how much you agree with the next set of statements.

[Figure: sample Likert-scale questions with an “I don’t know” response option]
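When entering and scoring such responses, one practical question is how to handle the “I don’t know” option so it is not forced onto the numeric scale. The short sketch below shows one way to code these responses; the scale labels and items are illustrative only, not taken from a specific instrument.

```python
# Minimal sketch: coding Likert responses with an "I don't know" option.
# "I don't know" is kept out of the numeric score rather than forced onto the 1-5 scale.

LIKERT = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

def code_response(answer: str):
    """Return a 1-5 code, or None for 'I don't know' or missing answers."""
    return LIKERT.get(answer)  # "I don't know" is not in the map, so it returns None

responses = ["Agree", "I don't know", "Strongly agree", "Disagree"]
coded = [code_response(r) for r in responses]   # [4, None, 5, 2]
valid = [c for c in coded if c is not None]
print(sum(valid) / len(valid))                  # mean of the valid items only
```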
  • Consider the length of the survey and the time it takes a person to complete it. Keep surveys administered by an interviewer to 15-20 minutes and self-administered surveys to 5-10 minutes [9].
  • When possible, consider using previously validated surveys or survey items and scales such as those provided in the “Data collection tools” section. Using a collection of questions or scales that have been previously validated can help save you time and reduce threats to internal validity.
  • When assessing food safety knowledge, use learning objectives to develop questions. For example, if the learning objective of a workshop or lesson curriculum is “participants will understand severity and susceptibility of foodborne illness” then start there to identify corresponding survey questions such as, “what are some of the consequences of foodborne illness?” or “what populations are most vulnerable to foodborne illness?”
  • Remember health literacy and cultural sensitivity. Make sure questions are clear, direct, and easy to understand, and that you take into account reading levels of participants. When thinking about cultural sensitivity, consider data collection methods that are culturally appropriate, applicable to the target audience, and sensitive to cultural norms. For example, some populations might be more receptive to female interviewers or feel more comfortable being interviewed by individuals from their own community or that speak their native language. Take time to learn and understand what interview strategies will work best for your target audience.
  • Pilot test your survey instrument. Adjust and refine your tool based on feedback and reactions.  Consider using cognitive interviews and the think aloud method when pilot testing [9]. Cognitive interviewing is a technique that allows individuals to verbalize their feelings and thought processes [2]. Find out whether or not, and how, participants comprehend the questions, are able to retrieve information for their answer, judge whether or not their information is an accurate or relevant answer, and respond to the question [10,12,16,18,26]. Incorporate open ended probes such as “what thoughts are going through your mind right now?” or “what could we do to improve this question?” to gather feedback and reactions to the questions [9].
  • Test for reliability to ensure your tool will provide consistent responses and results when used repeatedly. For example, assess reliability by administering the same test to the same individuals over a period of time to ensure consistency in the results.
  • Test for validity to ensure that your tool actually measures what it is supposed to measure. There are four types of validity usually assessed when creating an instrument [15]. (A short sketch illustrating the reliability and criterion-validity checks appears at the end of this list of tips.)

Types of validity –

  • Face validity: the degree to which an instrument appears to measure the concepts or constructs you wish to measure. This can be the weakest form of validity because it is subjective and not evidence based. You could assess face validity with a group of stakeholders or representatives of the target audience by asking if the group finds that the questions are relevant and address the constructs or topics you are interested in.
  • Content validity: the degree to which measures in the instrument contain a reasonable number of attributes of the concept you wish to gather information on. This can be assessed by having a panel of experienced judges identify all the attributes of a concept and rate how representative the measure is of the concept, such as on a 1-9 scale, with 1 being extremely inappropriate and 9 being extremely appropriate.
  • Criterion validity: the degree to which a measure can accurately predict the dependent variables or outcomes you want to gather information about. It can be assessed by comparing a measure with other measures to find how strongly they correlate with each other. For example, you could measure how much an individual’s food safety skill score correlates with his or her actual ability to implement safe food handling practices. A measure has criterion validity if a high correlation is found with at least one of the criterion measures.
  • Construct validity: refers to how much an instrument is able to assess the theoretical construct or concept, such as self-efficacy, it is meant to measure. Construct validity is present when measures of an instrument are consistent with the theoretical hypotheses of the concept you want to gather information on [8].
  • Provide training for inexperienced interviewers and opportunities for them to practice and role play with representatives from the target audience. Prepare a data collection manual that describes procedures and provides information such as background on the program, recruitment methods, and data collection schedules, procedures, materials, and submission requirements [9]. Provide each interviewer with his or her own manual and go over its contents during training. More details on what to include in a data collection manual can be found on page 45 of the USDA’s Addressing the Challenges of Conducting Effective Supplemental Nutrition Assistance Program Education (SNAP-Ed) Evaluations: A Step-by-Step Guide: http://www.fns.usda.gov/sites/default/files/SNAPEDWaveII_Guide.pdf
  • Include an interview script to introduce and conclude the survey. Also include instructions and explanations that will help guide interviewees and participants through the questions.
  • When introducing the survey, provide an explanation on why it is important and how the participants’ feedback can help their community.

Sample: “Thank you for participating in this interview. Your feedback will help us learn more about food safety education and how we can improve our program to best serve your community and reduce foodborne illness. The purpose of this interview is to find out what you know about food safety, how you feel about food handling practices, and how you store and prepare foods at home. Remember, there are no right or wrong answers, so please feel free to say anything that comes to mind.”

  • Make sure questions are relevant and address your evaluation objectives.
  • Think about whether you are assessing inputs, outputs, and outcomes when developing evaluation questions. Below are examples of input, output, and outcome questions to evaluate a social media campaign on Twitter [4]:
    • Input: How many pilot tested Twitter posts have been developed?
    • Output: How many messages were posted throughout the campaign (October-December)? How many Tweets were retweeted? How many Tweets were clicked as a favorite? How many new followers were gained?
    • Outcome: How many teenagers in the county learned about the new Germ Wars campaign?
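As noted in the reliability and validity tips above, both checks often come down to correlations between sets of scores. The sketch below uses hypothetical scores to illustrate a test-retest reliability check and a criterion-validity check; the numbers are made up for illustration and the correlation is computed by hand.

```python
# Minimal sketch: test-retest reliability and criterion validity as correlations
# between (hypothetical) participant scores.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Test-retest reliability: same survey, same people, administered twice
score_time1 = [70, 85, 60, 90, 75]
score_time2 = [72, 80, 65, 88, 78]
print("test-retest r:", round(pearson_r(score_time1, score_time2), 2))

# Criterion validity: knowledge/skill score vs. observed safe-handling score
knowledge = [70, 85, 60, 90, 75]
observed_behavior = [60, 82, 58, 80, 72]
print("criterion r:  ", round(pearson_r(knowledge, observed_behavior), 2))
```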

"How They Did It" in an orange box.To evaluate a health education initiative in Georgia elementary schools, pre- and post-test questionnaires were developed to evaluate the program’s effectiveness in increasing knowledge about proper handwashing. Keeping the target audience in mind, evaluation questions were developed to use a similar format to those often used in elementary schools.  Extension food safety educators and child development specialists examined evaluation questions for content validity and readability and to ensure they were appropriate for the age- and grade-level of the sample. The University of Georgia Institutional Review Board approved all evaluation methods and instruments. Questionnaires were distributed in test packets students were accustomed to using when taking standardized tests. Evaluation data, collected from 5,462 youth, indicated that the program materials were effective in increasing knowledge about handwashing.

Harrison, J. (2012). Teaching children to wash their hands – wash your paws, Georgia! Handwashing education initiative. Food Protection Trends. 32(3), 116-123.

Ways to administer a questionnaire

Mail

Benefits:
  • Participants might find it more convenient to take the survey at their own pace and time and in their own home.
  • Reduces the need for participants to travel and can avoid transportation challenges.
  • Does not require interviewers, which can save time and require less staff support.

Limitations:
  • Need a list of mailing addresses.
  • Printing and mailing may be costly.
  • Unable to track or confirm whether surveys were actually received.
  • Participants might ignore the mailed questionnaire and not respond.
  • Cannot track how long it takes participants to complete the survey.
  • Can miss visual cues and reactions that might be valuable and informative.
  • Participants might go online or ask someone for assistance with answering questions, and there is no way to track this.
  • Participants might not complete and mail back surveys in a timely fashion.

Email or Web-based

Benefits:
  • Can be a convenient and good option for a tech-savvy audience.
  • Participants might find it more convenient to take the survey at their own pace and time and in their own home.
  • Reduces the need for participants to travel and can avoid transportation challenges.
  • Low cost to set up and maintain.
  • Easy to administer and requires minimal on-the-ground staff support.
  • Ability to track the percentage of emails that are viewed and monitor the status of the survey using programs such as Qualtrics (https://www.qualtrics.com/) or Survey Monkey (https://www.surveymonkey.com/).
  • Can reduce the time needed to input survey responses into a spreadsheet, as survey programs will provide that service.

Limitations:
  • Need to have a list of emails or be able to contact participants to send the survey or a link to the survey.
  • Need to ensure participants are comfortable taking online surveys (may not be best for older populations).
  • Need to ensure participants have reliable access to the Internet and a computer or smartphone.
  • Participants might go online or ask someone for assistance with answering questions, and there is no way to track this.
  • Can miss visual cues and reactions that might be valuable and informative.

In person, group-administered

Benefits:
  • Participants might feel more comfortable taking the survey in a group setting than one-on-one.
  • Can take less time and require fewer interviewers by administering to everyone at once.
  • Can be easy to implement following a program activity such as a workshop, training, or event, when all participants are in the same location at the same time.
  • Participants can share concerns or ask questions in real time.
  • Allows for a more personal experience, and the interviewer can document visual cues and reactions.

Limitations:
  • Need to find a time and location where participants are all together; this may be difficult if the program does not already provide opportunities for this to take place.
  • Can require more resources (staff, time, and transportation).

In person, one-on-one

Benefits:
  • The target audience may prefer more personal face-to-face interactions.
  • Allows for a more personal experience, and the interviewer can document visual cues and reactions.
  • Participants can share concerns or ask questions in real time.

Limitations:
  • If going door-to-door, participants may not feel comfortable allowing a stranger into their home or opening the door for someone they don’t know.
  • Can require more resources (staff, time, and transportation).
  • Can face scheduling difficulties.

Phone

Benefits:
  • Participants might find it more convenient to be interviewed in their own home or any location of their choice.
  • Can be easier to schedule.
  • Reduces the need for participants to travel and can avoid transportation challenges.
  • Participants can share concerns or ask questions in real time.

Limitations:
  • Need a list of phone numbers.
  • Must ensure participants have access to phones.
  • Participants may not feel comfortable answering an unfamiliar number.
  • Can miss visual cues and reactions that might be valuable and informative.
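Whichever administration method you choose, it helps to track how many questionnaires go out and how many come back. Below is a minimal sketch with hypothetical counts for each mode; survey platforms such as Qualtrics or Survey Monkey can report similar figures for online distribution.

```python
# Minimal sketch: response rates by administration mode, using hypothetical counts.

distributed = {"mail": 200, "email": 350, "in_person_group": 60, "phone": 90}
completed   = {"mail": 48,  "email": 120, "in_person_group": 55, "phone": 40}

for mode, sent in distributed.items():
    rate = completed[mode] / sent
    print(f"{mode:16s} {completed[mode]:>4d}/{sent:<4d} completed ({rate:.0%})")
```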

Recruitment and Retention

Another factor that contributes to the success of an evaluation is participation. Below are some tips to help you recruit and retain participants:

  • Incentives, Incentives, Incentives! Provide incentives to participants who complete the evaluation. Do this each time you administer an assessment (at the pre-test and the post-test). Emphasize the incentive opportunity as you recruit and advertise. Involve members from the target audience when selecting the incentive (gift card, discount, coupon, or freebies) to make sure it is something that people actually want and will motivate them to participate.
  • Use key informants or individuals from the target audience to help with the recruitment process.
  • Be flexible when scheduling and plan around the participants’ availability.
  • If possible provide services such as transportation, refreshments, or day care.
  • Be open and honest when explaining the purpose of the interview. Keep it short and let people know the process won’t take more than X amount of their time.
  • Always be respectful, friendly, and maintain a good reputation in the community or with your target audience. Listen and be attentive to questions or concerns. Creating and maintaining a good reputation can help ensure that people are willing to work with you for future opportunities as well.
  • Say thank you! Always thank participants for their time and valuable feedback.
  • Don’t lose touch with participants if you need to do a follow-up test. Ask for the best way to reach them and get in touch in advance to inform them of the next survey time.
  • Follow through with any promises or commitments you make to participants. For example, if you advertise a specific incentive as a thank-you gift, make sure that same incentive is provided to participants. Or, if you tell participants they will be contacted within a week with an answer to their question or to follow up on the evaluation process, make sure it is done within the promised time frame.
  • Keep participants in the loop. Let them know if or when you will be sharing evaluation findings. Consider providing an open presentation of the final data for any interested participants and allow them to invite friends or family. This can help the target audience feel more actively involved in the process and can demonstrate how important and valuable their feedback is for the program.

Ethics

Throughout the evaluation, and even the needs assessment, it is important to think about protecting the rights of the participants. This is particularly significant when interacting with vulnerable populations, which include children under 18, prisoners, pregnant women, and anyone who is at risk of being coerced. Make sure that you are sensitive to the culture, needs, and rights of the target audience throughout program implementation and evaluation, instead of focusing solely on evaluation or research goals.

Consider using a community-based participatory research approach to your program and evaluation. This approach involves forming collaborative partnerships between researchers and community members to ensure equitable involvement throughout the process. This can be a great way to empower your target audience to be active participants throughout the needs assessment, implementation of the program, and the evaluation. A community-based participatory research approach can also help you be aware of any ethical concerns throughout the program and evaluation and learn about how to best address any potential challenges. Working closely with your target audience as an equal partner can also build valuable trusting relationships that are beneficial to both parties and can foster co-learning [25].

Note: It is important to ensure confidentiality of all the information collected from participants, particularly if confidentiality is promised to participants during recruitment. When planning your evaluation, think about what steps must be taken or what systems need to be in place to ensure confidentiality throughout the data collection and analysis process.
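One common way to help protect confidentiality is to separate identifying information from survey responses and link the two only through an assigned study ID. The sketch below illustrates the idea; the field names and storage approach are assumptions for illustration, and your organization or IRB may have its own required procedures.

```python
# Minimal sketch: separating identifying information from responses via a study ID.
# Field names are hypothetical.

import secrets

def assign_study_id() -> str:
    """Generate a random, non-identifying participant ID, e.g. 'P9f3a1c2b'."""
    return "P" + secrets.token_hex(4)

raw_record = {"name": "Jane Doe", "phone": "555-0100",
              "q1": "Agree", "q2": "165F"}

study_id = assign_study_id()

# Contact roster: stored separately and securely, only for follow-up contact
contact_roster = {study_id: {"name": raw_record["name"], "phone": raw_record["phone"]}}

# De-identified record: the only version that goes into the analysis file
deidentified = {"study_id": study_id, "q1": raw_record["q1"], "q2": raw_record["q2"]}
print(deidentified)
```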

Institutional Review Board

The U.S. Department of Health and Human Services defines research as “a systematic investigation, including research development, testing and evaluation, designed to develop or contribute to generalizable knowledge” [24]. If your evaluation fits this definition and you plan on sharing or publishing evaluation findings as generalizable knowledge, then you will need to apply for Institutional Review Board (IRB) approval [17,24]. You will also need approval if you are receiving any kind of Federal funding. The purpose of an IRB review is to ensure the protection of the rights of human subjects and participants in research. To obtain IRB approval you will need to submit an application to a local or private IRB committee and demonstrate that you will be following Federal guidelines, such as those related to research ethics and informed consent.

The length of time it takes to obtain approval generally depends on factors such as the sensitivity of the topic, the target audience, and the level of risk involved in the research. There are three types of IRB review for which you may be eligible [17]:

  1. Exempt – No risk or less than minimal risk to participants
  2. Expedited – Minimal risk to participants
  3. Full review – More than minimal risk to participants

Informed consent

Whether or not you need to obtain informed consent depends on how you plan to use the evaluation data and the requirements of your organization and/or the program or evaluation funders [17]. If you are required to obtain informed consent, here’s what you need to include in the informed consent form [17]:

[Graphic: informed consent requirements]

Consider the following best practices when obtaining informed consent to make sure participants fully understand their rights and the information they are consenting to [1]:

  • Recognize the importance of time – do not make the informed consent process too long.
  • Train staff on the importance of informed consent.
  • View and treat participants as part of the decision making process.
  • Consider your audience: tailor the informed consent process to address cultural differences, health literacy levels, language needs, and demographic factors.
  • Use plain and simple language and provide information at an 8th grade reading level or below.
  • Think about using alternative methods to convey information such as video, visual handouts, or PowerPoint.
  • Assess and confirm comprehension by using the “teach back” or “teach to goal” method. This involves participants repeating back to you the information you shared with them until they demonstrate that they fully understand it.

Health Insurance Portability and Accountability Act

You must also ensure that you do not violate any Health Insurance Portability and Accountability Act (HIPAA) rules when conducting research or your evaluation [17]. HIPAA protects an individual’s right to keep their health care information private. Depending on the information collected in the evaluation, you may need participants to sign a form giving permission for you to share their medical information [17]. HIPAA regulations generally apply to healthcare organizations that provide medical services, which might not always be applicable to consumer food safety education programs [17]. However, it is important to keep HIPAA in mind if you plan to ask questions related to the health status of participants or the kinds of health services they have received.

In Summary

When thinking about data collection and how to apply what you learned in this chapter to your program, you may want to ask:

  • How will evaluation data be collected (e.g. questionnaire, focus group, or observations)? What are the benefits and limitations of this data collection method?
  • Can mixed methods be used in the evaluation?
  • Do any evaluation instruments already exist that could be used or adapted for the evaluation?
  • If a new evaluation instrument will be developed –
    • Will demographic questions be included?
    • How will the instrument be pilot tested?
    • How will health literacy and cultural sensitivity be taken into account?
    • How long will the survey be?
    • How will I ensure validity and reliability of the tool?
  • If administering a questionnaire – how will it be distributed? What are the benefits and limitations of this method?
  • What will I do to recruit and retain participants?
  • When do I need to submit the IRB application? What type of IRB review is my evaluation eligible for?
  • What will I do to ensure confidentiality of the information collected?
  • What will I do to ensure program evaluation methods are ethical?

References

  1. Aldoory, L., Ryan, K.B., & Rouhani, A. (2014). Best practices and new models of health literacy for informed consent: review of the impact of informed consent regulations on health literate communications. Institute of Medicine. Retrieved from: http://www.nationalacademies.org/hmd/~/media/Files/Activity%20Files/PublicHealth/HealthLiteracy/Commissioned-Papers/Informed_Consent_HealthLit.pdf
  2. Beatty, P. C., & Willis, G. B. (2007). Research synthesis: The practice of cognitive interviewing. Public Opinion Quarterly, 71(2), 287-311.
  3. Borrusso, P., & Quinlan, J. J. (2013). Development and piloting of a food safety audit tool for the domestic environment. Foods, 2(4), 572-584.
  4. Brodalski, D., Brink, H, Curtis, J., Dia, S., Schindelar, J., Shannon, C., & Wolfson, C. (2011). The health communicator’s social media toolkit. Electronic Media Branch, Division of News and Electronic Media, Office of the Associate Director of Communication at the Centers for Disease Control and Prevention (CDC). Retrieved from: http://www.cdc.gov/healthcommunication/toolstemplates/socialmediatoolkit_bm.pdf
  5. Byrd-Bredbenner, C., Maurer, J., Wheatley, V., Cottone, E., & Clancy, M. (2007). Observed food safety behaviors of young adults. British Food Journal, 109(7), 519-530.
  6. Byrd-Bredbenner, C., Schaffner, D. W, & Abbot, J. M. (2010). How food safe is your home kitchen? A self-directed home kitchen audit. Journal of Nutrition Education and Behavior. 42,286-289.
  7. Byrd-Bredbenner, C., Wheatley, V., Schaffer, D., Bruhn, C., Blalock, L., & Maurer, J. (2007). Development of food safety psychosocial questionnaires for young adults. Journal of Food Science Education, 6(2), 30-37.
  8. Carmines, E. G., & Zeller, R. A. (1979). Reliability and Validity Assessment. Thousand Oaks, CA: Sage Publications.
  9. Cates, S., Blitstein, J., Hersey, J., Kosa, K., Flicker, L., Morgan, K., & Bell, L. (2014). Addressing the challenges of conducting effective supplemental nutrition assistance program education (SNAP-Ed) evaluations: a step-by-step guide. Prepared by Altarum Institute and RTI International for the USDA, Food and Nutrition Service. Retrieved from: http://www.fns.usda.gov/sites/default/files/SNAPEDWaveII_Guide.pdf
  10. Collins, D. (2003). Pretesting survey instruments: An overview of cognitive methods. Quality of Life Research, 12, 229-238.
  11. Creswell, J. W. (2007). Chapter 3: Designing a Qualitative Study. Qualitative Inquiry and Research Design: Choosing among Five Approaches, 35-41.
  12. Daugherty, S. D., Harris-Kojetin, L., Squire, C., & Jael, E. (2001). Maximizing the quality of cognitive interviewing data: An exploration of three approaches and their informational contributions. Proceedings of the Annual Meeting of the American Statistical Association.
  13. Dharod, J. M., Perez-Escamilla, R., Paciello, S., Venkitanarayanan, K., Bermudez-Millan, A., & Damio, G. (2007). Critical control points for home prepared ‘Chicken and Salad’ in Puerto Rican households. Food Protection Trends, 27(7), 544-522
  14. Food and Drug Administration (FDA). White Paper on Consumer Research and Food Safety Education. (DRAFT).
  15. Grembowski, D. (2001). The practice of health program evaluation. London, U.K.: Sage Publications.
  16. Haeger, H., Lambert, A. D., Kinzie, J., & Gieser, J. (2012). Using cognitive interviews to improve survey instruments. Indiana University Center for Postsecondary Research – Paper presented at the annual forum of the Association for Institutional Research. Retrieved from: http://cpr.indiana.edu/uploads/AIR2012%20Cognitive%20Interviews.pdf
  17. Issel, L. M. (2014). Health program planning and evaluation: A practical and systematic approach for community health. Sudbury, MA: Jones and Bartlett Publishers.
  18. Jobe, J. B. (2003). Cognitive psychology and self-reports: Models and methods. Quality of Life Research, 12, 219-227
  19. Kendall, P. A., Elsbernd, A., Sinclair, K., Schroeder, M., Chen, G., Bergmann, V., & Medeiros, L. C. (2004). Observation versus self-report: Validation of a consumer food behavior questionnaire. Journal of Food Protection, 67(11), 2578-2586.
  20. Medeiros, L. C., Hillers, V. N., Chen, G., Bergmann, V., Kendall, P., & Schroeder, M. (2004). Design and development of food safety knowledge and attitude scales for consumer food safety education. Journal of the American Dietetic Association, 104(11), 1671-1677.
  21. Parra, P. A., Kim,  H., Shapiro, M. A., & Gravani, R. (2014). Home food safety knowledge, risk perception, and practices among Mexican-Americans. Food Control, 37(1), 115-125.
  22. Phang, H. S., & Bruhn, C. M. (2011). Burger preparation: what consumers say and do in the home. Journal of Food Protection, 74(10), 1708-1716.
  23. Takeuchi, M. T., Edlefsen, M., McCurdy, S. M., & Hillers, V. N. (2006). Development and validation of stages-of-change questions to assess consumers’ readiness to use a food thermometer when cooking small cuts of meat. Journal of the American Dietetic Association, 106(2), 262–266.
  24. U.S Department of Health & Human Services. (2009). Code of federal regulations, title 45, public welfare, part 46 protection of human subjects. Retrieved from http://www.hhs.gov/ohrp/regulations-and-policy/regulations/45-cfr-46/index.html
  25. Wallerstein, N. B., and Duran, B. (2006) Using community-based participatory research to address health disparities. Health Promotion and Practice. 7(3), 312-323.
  26. Willis, G. B. (1999). Cognitive interviewing: A “How To” guide. Research Triangle Institute. 1999 Meeting of the American Statistical Association. Research Triangle Park, NC: Research Triangle Institute. Retrieved from: http://appliedresearch.cancer.gov/archive/cognitive/interview.pdf
Download the Full Guide PDF
Access the Toolkit Resources
